Introduction
This policy is intended for recipients of Criminology Research Grants (CRG) from the 2024-25 round onwards and provides guidance on how generative artificial intelligence (AI) can be used in research projects funded via the CRG program. For the purposes of this policy, generative AI refers to algorithms that generate or aggregate information to create new content, such as text, images, audio and video.
This policy does not affect existing legal and regulatory obligations to which CRG recipients are subject.
General principles
Generative AI can be used for research purposes in CRG projects. This can include (but is not limited to) the generation of research designs and analytical strategies, bibliographic searches, generation of reference lists, literature reviews (eg summarising papers), production of code to analyse data, production of classifications based on data, and other forms of data analysis.
The proposed use of generative AI must be clearly articulated in the CRG application. This must take into account the principles outlined below.
In using generative AI, the following principles apply.
- Ethical use. Generative AI should only be used ethically. As a minimum, a Human Research Ethics Committee should approve the use of generative AI in the project. Careful attention should be paid to ensuring that the results produced by generative AI are not subject to biases, errors or falsification that could discredit the research or, more importantly, harm research subjects or the wider community. If research findings rely on decisions made by a generative AI, these decisions should be made clear and substantiated.
- Transparency. Research reports should include a description of how generative AI was used, including the AI employed and how it was accessed; whether there were conditions placed upon the researchers when accessing the AI; the training data used and the start and end dates of the training data; a description of the instructions provided to the AI; and any priming instructions used prior to the instructions specific to the research task. The limitations associated with using generative AI should also be clearly articulated. Replication is an important principle in research and researchers should be confident that results produced using generative AI can subsequently be replicated using the same inputs.
- Data protection. Any information obtained from a third party should only be input into a generative AI algorithm with the written permission of the original data custodian, who must be informed about the intended uses of the data. Any data input into a generative AI should adhere to the ethics requirements governing the use of that data, including de-identification. Researchers should be confident that re-identification will not be possible after the data is input into the algorithm, and that the research data will not be aggregated into the training data of that AI.
Quality assurance
The Australian Institute of Criminology will continue to quality assure all CRG reports using a double-blind peer review process. The use of generative AI to create peer reviews is not permitted.
Exceptions
An exception to this policy may exist when a recipient of a CRG or a team member named in a CRG application owns the intellectual property associated with a proprietary generative AI that is used in a CRG project. A proposal to use proprietary generative AI must be included in the CRG application, setting out the nature of its use. All other requirements (including ethics approval) remain the same as with other uses of generative AI.
Date of policy: 19 March 2024