What is Artificial Intelligence?

"Artificial intelligence (AI) is a collective term for machine-based or digital systems that use machine or human-provided inputs to perform advanced tasks for a human-defined objective, such as producing predictions, advice, inferences, decisions, or generating content." (definition from Solomon, L. & Davis, N. (2023), The State of AI Governance in Australia, Human Technology Institute, University of Technology Sydney).

When used with appropriate governance, AI technologies can improve the efficiency and timeliness of logical tasks. AI technologies require ethical frameworks to work within, and human oversight, to ensure that:

  • bias or harm does not occur
  • content provided is accurate and not misleading
  • malicious activity does not result from its use
  • fundamental human rights are not impaired.

Different types of AI have different purposes. For example: 

  • Automation AI technologies undertake actions in accordance with specific parameters and data sets available
  • Machine learning and other decision-making AI technologies make decisions in accordance with specific parameters and data sets available
  • Generative AI including Large Language Models (LLMs) create new content in accordance with specific parameters and data sets available
  • Combination AI technologies can do all of the above: create content, form decisions on that content, and then perform actions on that content in accordance with specific parameters and data sets available.

 

Why is it important to capture records of AI technologies?

It is not enough to have records of business, actions and decisions on their own. It is also vital to have records of the technologies and processes that produced them, because those processes and technologies affect how the record was formed, what was captured and how it will be kept. These in turn affect a range of factors, including access to the record, the context within which the record is understood, and the integrity of the record as evidence.

Stakeholders have increasing awareness of harms and inaccuracies produced through using AI technologies. Stakeholders will want to know:

  • Was AI technology used to make the decision/take the action or generate the content?
  • Was/is the algorithm and/or underlying data biased?
  • Was/is the content produced through AI technologies accurate and relevant?
  • Has the agency done enough to mitigate any negative impacts?

Capturing records of AI use

Multiple factors impact what records of AI will need to be kept and managed. These include the type of AI used, the possible harm caused through using the AI, and the level of risk involved.

The table below describes different types of AI technologies and what records would need to cover (please note that this is not a definitive list).

Automated Decision Making (ADM) Systems
Description: Use data to classify, analyse and make decisions that affect people with little or no human intervention.
Records required to verify:
- that the decision was lawful
- that the decision was made in line with appropriate procedures and lines of authorisation
- that there was no bias involved or that bias was addressed appropriately
- that the decision was based on accurate and current information

Expert Systems
Description: Use a knowledge base, inference engine and logic to mimic how humans make decisions.
Records required to verify:
- that the decision was lawful
- that the decision was made in line with appropriate procedures and lines of authorisation
- that there was no bias involved or that bias was addressed appropriately
- that the decision was based on accurate and current information

Generative AI
Description: Systems that produce code, text, music, or images based on text or other inputs.
Records required to verify:
- that the content produced has been fact checked and is accurate, current and appropriate
- that the content produced is in line with relevant legislation and regulation

Large Language Model (LLM)
Description: A type of generative AI that specialises in the generation of human-like text.
Records required to verify:
- that the content produced has been fact checked and is accurate, current and appropriate
- that the content produced is in line with relevant legislation and regulation

Multimodal Foundation Model (MfM)
Description: A type of generative AI that can process and output multiple data types (e.g. text, images, audio).
Records required to verify:
- that the content produced has been fact checked and is accurate, current and appropriate
- that the content produced is in line with relevant legislation and regulation

Machine Learning Systems (MLS)
Description: A broad set of models that have been trained on pre-existing data to produce useful outputs on new data.
Records required to verify:
- that the content produced has been fact checked and is accurate, current and appropriate
- that the content produced is in line with relevant legislation and regulation

Natural Language Systems
Description: Models that can understand and use natural language and speech for tasks such as summarisation, translation, or content moderation.
Records required to verify:
- that the content produced is authorised and valid
- that the content produced has been fact checked and is accurate, current and appropriate
- that the content produced is in line with relevant legislation and regulation

Robotic Process Automation
Description: Systems that imitate human actions to automate routine tasks through existing digital interfaces.
Records required to verify:
- that the action was lawful
- that the action was taken in line with appropriate procedures and lines of authorisation
- that there was no bias involved or that bias was addressed appropriately
- that the action was based on accurate and current information

Virtual Agents and Chatbots
Description: Digital systems that engage with customers or employees via text or speech.
Records required to verify:
- that the engagement content produced is authorised and valid
- that the engagement content produced has been fact checked and is accurate, current and appropriate
- that the engagement content produced is in line with relevant legislation and regulation
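
To make verification points like these answerable from the records themselves, the relevant details can be captured as structured metadata against each decision. The sketch below is a minimal illustration in Python; every field name here is a hypothetical assumption, not a mandated schema, and would need to be mapped to an agency's own recordkeeping metadata standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record of one decision made by an ADM or expert system."""
    decision_id: str
    system_name: str           # which AI system made the decision
    system_version: str        # model/rule-set version in force at the time
    legal_authority: str       # legislation or delegation relied upon
    authorised_by: str         # line of authorisation (role, not individual)
    procedure_reference: str   # procedure the decision followed
    input_data_sources: list   # provenance of the data used
    data_current_as_at: str    # how current the underlying information was
    bias_assessment_ref: str   # pointer to bias testing/mitigation records
    outcome: str               # the decision itself
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

A record structured this way allows the four questions for decision-making systems (lawfulness, authorisation, bias, and data currency) to be answered directly from the captured fields.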

 

Stakeholders are increasingly aware of the impacts that the use of AI technologies has on the decisions and actions that directly affect them. Using AI technologies without considering their possible impact on all stakeholders involved has been demonstrated to cause harm. Records provide documented evidence of what transpired.

The table below describes some of the risks and harms associated with the use of AI technologies, and ways that good recordkeeping practices can assist with their mitigation (please note that this is not a definitive list).

Risk/Harm: Human rights risks/harm due to bias (race, gender, economic, age, etc.)
Description: Occurs for various reasons, including the data set not being sufficiently diverse or having inherent bias that is magnified by the AI. Examples include racism in predictive AI due to racism in the underlying data, and facial recognition software (FRS) not recognising black faces because it has only been trained on white faces (see https://www.ajl.org/).
Recordkeeping mitigations:
- approval process decisions captured and addressed in line with the Approval Processes Policy
- notification that AI is being used, created/captured and managed along with the content
- create and capture records of monitoring and addressing risks/areas of bias
- retain for the duration of the retention period and dispose of content lawfully

Risk/Harm: Accuracy risks/harm due to incorrect or inaccurate information (fiction reported as fact)
Description: Referred to as a 'hallucination'. Occurs for various reasons, including the language used by the human and the way the AI was taught to respond. Examples include references being invented by the AI, and unverified content being presented as fact by the AI (see usatoday.com article and sify.com article).
Recordkeeping mitigations:
- notification that AI is being used and what content is being generated by it, created/captured and managed along with the content
- create and capture records of due diligence, e.g. fact checking and confirming whose intellectual property it is
- retain for the duration of the retention period and dispose of content lawfully

Risk/Harm: Transparency risks/harm due to inability to explain AI decisions and actions
Description: Occurs when it is unclear whether an AI was involved in making decisions or taking action, and/or how an AI was involved (see www.uts.edu.au article).
Recordkeeping mitigations:
- notification that AI is being used and what content is being generated by it, created/captured and managed along with the content
- approval process decisions captured and addressed in line with the Approval Processes Policy
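
Several of the mitigations above come down to keeping notification and due-diligence metadata together with the content itself. One way to do this is a 'sidecar' record captured at the moment content is generated; the Python sketch below is illustrative only, and all field names are assumptions rather than a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIContentSidecar:
    """Hypothetical metadata kept alongside each piece of AI-generated content."""
    content_id: str
    generated_by: str             # AI system/model that produced the content
    ai_use_disclosed: bool        # was the use of AI notified to stakeholders?
    fact_checked_by: str          # human who verified the content
    fact_checked_on: date         # when the fact check took place
    ip_ownership_confirmed: bool  # due diligence on whose intellectual property it is
    retention_class: str          # disposal authority class governing retention
```

Keeping the sidecar under the same retention rules as the content means the notification and fact-checking evidence survives for as long as the content does.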

 

Recordkeeping should be considered at all stages of the lifecycle when designing and implementing AI technologies.

The list below is a starting point for what may be considered to ensure accuracy and transparency as part of a records management program:

1. Identify, describe and document each AI system used across the organisation, including:

  • the intended purpose and outcomes
  • desired benefits
  • type of AI used
  • context and scope of use
  • stage of implementation
  • sources of data
  • identified harms and risks
  • any controls or systems in place to mitigate risks; and
  • other relevant information, such as what cannot be documented (e.g. 'black box technologies' that do not disclose algorithms and/or other elements used to make decisions).

2. Document and report on how the system has worked over time, e.g. using automated logging to record events, capturing reports on issues and their mitigation, and capturing the results of assessment and monitoring programs (steps 1 and 2 are sketched together after this list).

3. Determine and document what content can be created and captured by AI technologies, what needs to be created and captured by a human being, and for AI generated content, at what points a human being is to be included.

4. Determine and document what actions can be undertaken by AI technologies and when a human being is to be included to confirm or undertake the actions.

5. Be transparent and accountable regarding ownership and responsibility in relation to all types of AI used:

  • provide a clear attribution for (or transparent acknowledgement regarding issues attributing ownership of) text, images, sound or video produced by generative AI
  • establish appropriate oversight of and responsibility (including lines of delegation and accountability) for each AI system used that factor in AI system failures, overuse of AI, and malicious use
  • determine the legal requirements or obligations applicable to each system, and carefully attribute legal liability for harms or errors to entities across the AI value chain, with safeguards placed at the most effective and appropriate points.

6. Establish an ongoing assessment and monitoring program that assesses AI technologies for trustworthiness:

  • use an assessment framework that is designed around a set of ethical principles to assess the AI technologies and systems from the idea/design phase through to implementation and ongoing operations
  • assess each AI technology's performance in relation to risk, bias and harm, and take steps to mitigate or otherwise address that risk, bias or harm
  • document and report on the results as part of an ongoing monitoring and assessment program.
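
Steps 1 and 2 above lend themselves to a structured register and automated event logging, as sketched below. This minimal Python example uses only the standard json and logging modules; the system name, fields and values are hypothetical assumptions rather than a prescribed format.

```python
import json
import logging

# Step 1: one register entry per AI system (illustrative structure and values)
register_entry = {
    "system_name": "example-triage-classifier",   # hypothetical system
    "purpose_and_outcomes": "prioritise incoming service requests",
    "ai_type": "machine learning (classification)",
    "context_and_scope": "internal service requests only",
    "implementation_stage": "pilot",
    "data_sources": ["request forms", "historical outcomes"],
    "identified_risks": ["possible bias against non-English speakers"],
    "controls": ["human review of all adverse outcomes"],
    "undocumented_elements": "vendor does not disclose the model's algorithm",
}

# Step 2: automated logging so the system's behaviour over time is on record
logging.basicConfig(
    filename="ai_system_events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_ai_event(system_name: str, event: str, detail: dict) -> None:
    """Append one auditable event (assessment result, issue, mitigation)."""
    logging.info(json.dumps({"system": system_name, "event": event, **detail}))

log_ai_event("example-triage-classifier", "bias_assessment",
             {"result": "pass", "reviewer": "records team"})
```

Because each event is written as a timestamped line, the log itself becomes a record of how the system has worked over time, ready to feed into the assessment and monitoring program described in step 6.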

 

Ethical principles are included in most AI frameworks to ensure that harm caused by or through the use of AI technologies is fully considered, and that potential harm is addressed.

The table below provides the Ethical Principles from the NSW AI Ethics Policy alongside some recordkeeping considerations to help in meeting each principle.

Community Benefit
Principle: AI should deliver the best outcome for the citizen, and key insights into decision making.
Recordkeeping considerations:
- documentation to justify why the AI is being used and what it is used for
- evidence of alignment with relevant legislation, including the Public Records Act 1973
- connection with a Privacy Impact Assessment - see OVIC
- alignment with the Human Rights Charter - see the Victorian Human Rights Commission (VHRC)
- documentation of how decisions are reached
- risk assessment documenting the possibility of harms (e.g. inaccurate information/bias/IP breach) and their mitigation (if possible) or consequences (if not)

Fairness
Principle: Use of AI will include safeguards to manage data bias or data quality risks, following best practice and Australian Standards.
Recordkeeping considerations:
- documentation showing the results of assessment for accuracy, bias and associated mitigation
- IP, copyright and other data permissions/Human Rights Charter alignment/Privacy Impact Assessment
- evidence showing data integrity and associated analysis
- systems performance reporting

Privacy and Security
Principle: AI will include the highest levels of assurance.
Recordkeeping considerations:
- Privacy Impact Assessment
- security assessment
- Human Rights Charter alignment
- data/information governance

Transparency
Principle: Review mechanisms will ensure citizens can question and challenge AI-based outcomes.
Recordkeeping considerations:
- full and accurate records - evidence of how decisions were reached, what level of detail is possible, risk assessment and mitigation re: harm/other impact
- evidence of consultation
- open access information about AI - what is publicly available
- appeals process and associated documentation

Accountability
Principle: Decision-making remains the responsibility of organisations and Responsible Officers.
Recordkeeping considerations:
- full and accurate records - evidence of how decisions were reached, what level of detail is possible, risk assessment and mitigation re: harm/other impact
- chains of responsibility mapped
- appeals process and associated documentation/human in the system at point of authorisation

 

Useful tools for documenting AI technologies, including related processes, are provided below. As the tools were designed for international or interstate jurisdictions, please remember to tailor them for use within Victoria. 

 

AI technologies are constantly evolving, including how and when they are used, what they are used for, what they are capable of doing, and the regulatory environment they are used within.

One of the ethical principles commonly referred to is explainability: people interacting with the business should be able to understand how AI has impacted their interactions. This requires a documented understanding of how the processes and technologies work in relation to specific types of records/data.

When communicating with stakeholders, it can be useful to:

  1. Acknowledge when AI technologies are being used, which ones, what they are being used for, and how they will work throughout their lifecycles.
  2. Provide a notification for individuals interacting with generative AI and other AI technologies/applications, along with information about opting out and requesting a review (an illustrative notice is sketched after this list). Include information on any 'black box technologies' used.
  3. Explain the processes and technologies used that resulted in the decision, action, or other business the stakeholder was involved with.
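
As an illustration of point 2, the sketch below shows a hypothetical notification text and a simple way of capturing evidence that it was shown; the wording, function and field names are assumptions for illustration only.

```python
from datetime import datetime, timezone

# Hypothetical notice text shown to individuals before an AI interaction
AI_NOTICE = (
    "This service uses a generative AI assistant to help draft responses. "
    "You can opt out and deal with a staff member instead, and you can "
    "request a review of any AI-assisted outcome."
)

def record_notification(user_id: str, channel: str) -> dict:
    """Return evidence that the notice was shown (illustrative fields only)."""
    return {
        "user": user_id,
        "channel": channel,
        "notice_text": AI_NOTICE,
        "shown_at": datetime.now(timezone.utc).isoformat(),
    }

# e.g. capture alongside the chat transcript
evidence = record_notification("user-123", "web chatbot")
```

Capturing the notice text and a timestamp alongside the interaction record makes it possible to demonstrate later that the disclosure was actually made.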
