Artificial Intelligence (AI) Transparency Statement

The policy for the responsible use of AI in government sets mandatory requirements for departments and agencies relating to accountable officials and transparency statements. It sets out the Australian Government's approach to embracing the opportunities of AI while providing for its safe and responsible use. This page outlines the AIC's commitment to these policy requirements and to the ethical use of AI in its criminological research, ensuring transparency, accountability and inclusivity.

Defining artificial intelligence

The AIC applies the definition of artificial intelligence provided by the Organisation for Economic Co-operation and Development (OECD):

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Accountable officials

Agencies must assign accountability for implementing the policy for the responsible use of AI in government to one or more accountable officials.

The responsibilities of the accountable officials are to:

  • be accountable for implementation of the policy within their agencies
  • notify the Digital Transformation Agency (DTA) where the agency has identified a new high-risk use case by emailing ai@dta.gov.au. This information will be used by the DTA to build visibility and inform the development of further risk mitigation approaches
  • be a contact point for whole-of-government AI coordination
  • engage in whole-of-government AI forums and processes
  • keep up-to-date with changing requirements as they evolve over time.

The accountable official for the AIC is the Deputy Director.

Use of artificial intelligence by the AIC

Use in research

The AIC has published a policy for the use of generative AI in projects funded by the Criminology Research Grants program. The policy permits grantees to use AI for research purposes while maintaining ethical standards, ensuring transparency and protecting data.

Use of AI in research by AIC staff is currently restricted and reviewed on a case-by-case basis. For example, supervised machine learning algorithms have been used in a small number of studies undertaken by the AIC to generate prediction models.
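
For context, a supervised prediction model of the kind referred to above is typically built by training a classifier on labelled data and assessing it on held-out cases. The sketch below is purely illustrative: it uses synthetic, randomly generated data and the open-source scikit-learn library, and does not represent the AIC's actual models, data or tooling.

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import classification_report

  # Synthetic data standing in for de-identified study features (X)
  # and a binary outcome of interest (y); illustrative only.
  rng = np.random.default_rng(seed=0)
  X = rng.normal(size=(500, 4))
  y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

  # Hold out a test set so predictive performance is assessed on unseen cases.
  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.25, random_state=0)

  # Fit a simple supervised classifier and report its predictive performance.
  model = LogisticRegression().fit(X_train, y_train)
  print(classification_report(y_test, model.predict(X_test)))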

When using AI for research purposes, AIC staff are not permitted to export or upload data to any third-party AI application.

Prior to undertaking research that involves applying AI, AIC staff will seek approval from a human research ethics committee.

Other uses of AI

AI is used to improve staff productivity by:

  • Generating abstracts from publicly available and open-source non-commercial publications for use by the library.
  • Generating and debugging code used in data analysis.

Staff training

All AIC staff are required to complete the Australian Public Service Academy’s AI in Government Fundamentals course by 31 May 2025.

Public interaction and impact

The AIC does not propose to use AI where the public may directly interact with or be significantly impacted by it.

Monitoring

The AIC will review its internal AI policies and governance approaches on an ongoing basis to ensure they remain fit for purpose.

Compliance

The AIC only uses AI services in accordance with applicable legislation, regulations, frameworks and policies.

Policy for the responsible use of AI in government

The AIC complies with all mandatory requirements of the policy.

Accountable official

The accountable official was appointed and notified to the DTA on 21 February 2025.

AI transparency statement

The AI transparency statement was first published to this website on 28 February 2025.

AI contact

For questions about this statement or for further information on the AIC’s use of AI, please contact frontdesk@aic.gov.au.