What is Artificial Intelligence?

'Artificial intelligence (AI) is a collective term for machine-based or digital systems that use machine or human-provided inputs to perform advanced tasks for a human-defined objective, such as producing predictions, advice, inferences, decisions, or generating content.' (Definition from Solomon, L. & Davis, N. (2023), The State of AI Governance in Australia, Human Technology Institute, University of Technology Sydney.)

Used with appropriate governance, AI technologies can improve the efficiency and timeliness of logical tasks. AI technologies require ethical frameworks to work within, and human oversight, to ensure that:

  • bias or harm does not occur
  • content provided is accurate and not misleading
  • malicious activity does not result from its use
  • fundamental human rights are not impaired.

Different types of AI have different purposes. For example: 

  • Automation AI technologies undertake actions in accordance with specific parameters and the datasets available
  • Machine learning and other decision-making AI technologies make decisions in accordance with specific parameters and the datasets available
  • Generative AI, including Large Language Models (LLMs), creates new content in accordance with specific parameters and the datasets available
  • Combination AI technologies (including agentic AI) can do all of the above: create content, form decisions on that content, and then perform actions based on that content in accordance with specific parameters and the datasets available.


Why is it important to capture records of AI technologies?

It is not enough to have records of business activities, actions and decisions on their own. It is vital to also have records of the technologies and processes that produced them. The processes and technologies used affect how the record was formed, what was captured and how it will be kept. They directly affect access to the record, the context within which the record is understood, and the integrity of the record as evidence, and they can influence what decision or action was taken.

People have an increasing awareness of the harms and inaccuracies produced through using AI technologies. When interacting with the Victorian Government, people will want to know:

  • was AI technology used to make the decision/take the action or generate the content?
  • was/is the algorithm and/or underlying data biased?
  • was/is the content produced through AI technologies accurate and relevant?
  • has the agency done enough to mitigate any negative impacts?

Keeping records enables AI technology use to be explained, including what was done to mitigate bias and confirm accuracy. For example:

  • chat-based AI technologies are designed to be convincing, but they are not necessarily correct. Accuracy depends on their source data, how the AI technology behaves (a combination of its algorithm and training, for example), and the prompt used.
  • historical bias in the source data will be picked up by the AI technology, as it looks for statistically likely results, not ethical ones.
  • undertaking and documenting checks to confirm the accuracy of an AI-produced result before it is acted upon helps with identifying and addressing potential harms as part of the decision-making process. A formal record of the decisions and actions made will demonstrate how possible bias has been considered and mitigated (a minimal sketch of such a record follows this list).
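
Where such checks happen often, it can help to capture each one in a consistent structure. The sketch below is illustrative only: the field names, tool name and level of detail are assumptions, not a prescribed PROV schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccuracyCheckRecord:
    """One documented human check of an AI-produced result before it is
    acted on. All field names are illustrative, not a mandated schema."""
    ai_tool: str                 # which AI technology produced the result
    output_summary: str          # what the tool produced
    sources_verified: list[str]  # authoritative sources checked against
    bias_considered: str         # how possible bias was assessed and mitigated
    checked_by: str              # the person accountable for the check
    approved_to_act: bool        # whether the result may be acted upon
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a check on a chatbot-drafted reply (hypothetical tool and values)
record = AccuracyCheckRecord(
    ai_tool="enquiry-chatbot",
    output_summary="Draft reply about permit fees",
    sources_verified=["current fee schedule", "published fees policy"],
    bias_considered="Reply reviewed for assumptions about the applicant",
    checked_by="records.officer@example.vic.gov.au",
    approved_to_act=True,
)
print(record)
```

Capturing the check alongside the AI output keeps the evidence of due diligence and the content it verified together in the recordkeeping system.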

Capturing records of AI use

Multiple factors affect what records of AI will need to be kept and managed. These include the type of AI used, the possible harms caused through using the AI, and the level of risk involved.

The table below describes some different types of AI technologies and the records required to verify their use.

Agentic AI
Applies the results from generative AI/large language models and communicates with other tools to undertake a specific goal autonomously.
Records required to verify:

- that the actions and decisions were lawful

- that the actions and decisions were undertaken in line with appropriate procedures and lines of authorisation

- that there was no bias involved, or that bias was addressed appropriately

- that the actions and decisions were based on accurate and current information

Automated Decision Making (ADM) Systems
Use data to classify, analyse and make decisions that affect people, with little or no human intervention.
Records required to verify:

- that the decision was lawful

- that the decision was made in line with appropriate procedures and lines of authorisation

- that there was no bias involved, or that bias was addressed appropriately

- that the decision was based on accurate and current information

Expert Systems
Use a knowledge base, inference engine and logic to mimic how humans make decisions.
Records required to verify:

- that the decision was lawful

- that the decision was made in line with appropriate procedures and lines of authorisation

- that there was no bias involved, or that bias was addressed appropriately

- that the decision was based on accurate and current information

Generative AI
Systems that produce code, text, music, or images based on text or other inputs.
Records required to verify:

- that the content produced has been fact checked and is accurate, current and appropriate

- that the content produced is in line with relevant legislation and regulation

Large Language Model (LLM)
A type of generative AI that specialises in the generation of human-like text.
Records required to verify:

- that the content produced has been fact checked and is accurate, current and appropriate

- that the content produced is in line with relevant legislation and regulation

Multimodal Foundation Model (MfM)
A type of generative AI that can process and output multiple data types (e.g. text, images, audio).
Records required to verify:

- that the content produced has been fact checked and is accurate, current and appropriate

- that the content produced is in line with relevant legislation and regulation

Machine Learning Systems (MLS)
A broad set of models that have been trained on pre-existing data to produce useful outputs on new data.
Records required to verify:

- that the content produced has been fact checked and is accurate, current and appropriate

- that the content produced is in line with relevant legislation and regulation

Natural Language Systems
Models that can understand and use natural language and speech for tasks such as summarisation, translation, or content moderation.
Records required to verify:

- that the content produced is authorised and valid

- that the content produced has been fact checked and is accurate, current and appropriate

- that the content produced is in line with relevant legislation and regulation

Robotic Process Automation
Systems that imitate human actions to automate routine tasks through existing digital interfaces.
Records required to verify:

- that the action was lawful

- that the action was taken in line with appropriate procedures and lines of authorisation

- that there was no bias involved, or that bias was addressed appropriately

- that the action was based on accurate and current information

Virtual Agents and Chatbots
Digital systems that engage with customers or employees via text or speech.
Records required to verify:

- that the engagement content produced is authorised and valid

- that the engagement content produced has been fact checked and is accurate, current and appropriate

- that the engagement content produced is in line with relevant legislation and regulation


People are increasingly aware of the impact that the use of AI technologies has on decisions and actions that directly affect them. Using AI technologies without considering their possible impact on all stakeholders has been shown to cause harm. Records provide documented evidence of what transpired.

Some questions to consider when documenting AI technologies and their use include:

  • Has an assessment been undertaken to determine whether the source data used for the AI tool is and remains accurate, and have the results been documented?
  • Has a review of the source data used for the AI tool been undertaken to assess whether the data is free of unfair bias, and the results documented?
  • Does the documentation address questions relating to the possibility of incorrect content through use of the technology, such as AI 'hallucinations'?
  • Does the documentation address risks related to using the technology that may affect its use or the level of documentation possible?
  • Does the documentation address risks relating to bias in the algorithm used?
  • Does the documentation address questions relating to intellectual property, including copyright?
  • Does the documentation address questions relating to privacy and confidential information?

The below table describes some of the risks and harms associated with the use of AI technologies and ways that good recordkeeping practices can assist with their mitigation (please note that this is not a definitive list).

Human rights risks/harms due to bias (race, gender, economic, age, etc.)
Occurs for various reasons, including the dataset not being sufficiently diverse or having inherent bias that is magnified by the AI. Examples include racism in predictive AI due to racism existing in the underlying data, and facial recognition software (FRS) failing to recognise all faces because it has mainly been trained on a small sample of faces.
Recordkeeping mitigations:

- capture and address in line with the Approval Processes Policy

- create and capture notifications and communications that AI is being used, along with benefits and opt-out information

- create and capture records of monitoring and addressing risks/areas of bias

- retain for the duration of the retention period and dispose of content lawfully

Accuracy risks/harms due to incorrect or inaccurate information, or fiction reported as fact
Often referred to as a 'hallucination'. Occurs for various reasons, including the language used by the human and the way the AI was taught to respond. Examples include references and case law being invented by the AI, and unverified content being presented as fact by the AI.
Recordkeeping mitigations:

- create/capture and manage notification that AI is being used, what content is being generated by it, and what source data is used

- create and capture records of due diligence, e.g. fact checking, explainability documentation, and confirming whose intellectual property it is

- retain for the duration of the retention period and dispose of content lawfully

Transparency risks/harms due to inability to explain AI decisions and actions
Occurs when it is unclear whether an AI was involved in making decisions or taking action, and/or how an AI was involved.
Recordkeeping mitigations:

- create and capture notifications and communications that AI is being used, along with benefits and opt-out information

- capture and address approval process decisions in line with the Approval Processes Policy


Recordkeeping should be considered at all stages of the lifecycle when designing and implementing AI technologies.

The below list is a starting point for what may be considered to ensure accuracy and transparency as part of a records management program:

 1. Identify, describe and document each AI system used across the organisation (a minimal register-entry sketch follows this list), including:

  • the intended purpose and outcomes
  • desired benefits
  • type of AI used
  • context and scope of use
  • stage of implementation
  • sources of data
  • identified harms and risks
  • any controls or systems in place to mitigate risks; and
  • other relevant information, such as what cannot be documented (for example, 'black box technologies' that do not disclose algorithms and/or other elements used to make decisions).
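
As one way of making such a register concrete, the sketch below shows a single register entry covering the fields listed above, serialised to JSON. The system name, values and file name are hypothetical; an agency's actual register format may differ.

```python
import json

# Illustrative register entry; every value below is an example, not a mandate.
register_entry = {
    "system_name": "correspondence-summariser",  # hypothetical system
    "intended_purpose_and_outcomes": "Summarise incoming correspondence for triage",
    "desired_benefits": ["faster triage", "more consistent summaries"],
    "type_of_ai": "Generative AI (LLM)",
    "context_and_scope_of_use": "Internal triage only; no automated decisions",
    "stage_of_implementation": "pilot",
    "sources_of_data": ["incoming correspondence", "vendor foundation model"],
    "identified_harms_and_risks": ["hallucinated content", "privacy exposure"],
    "controls_in_place": ["human review of every summary", "PII redaction"],
    "other_relevant_information": (
        "Vendor model weights are a black box; the algorithm cannot be documented"
    ),
}

# One JSON document for the organisation-wide register, to which entries are added
with open("ai_system_register.json", "w") as f:
    json.dump([register_entry], f, indent=2)
```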

2. Document and report on how the system has worked over time (for example, using automated logging to record events, capturing reports on issues and their mitigation, and capturing the results of assessment and monitoring programs).
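
A minimal sketch of such automated logging, assuming a hypothetical logger name and event fields, might look like this:

```python
import json
import logging

# Structured, timestamped event log for an AI system (names are illustrative)
logging.basicConfig(filename="ai_events.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
logger = logging.getLogger("ai_system_events")

def log_ai_event(system: str, event: str, **detail: str) -> None:
    """Record one event as a JSON line so behaviour over time can be
    reported on, and the log itself retained as a record."""
    logger.info(json.dumps({"system": system, "event": event, **detail}))

# Example events a monitoring program might capture
log_ai_event("correspondence-summariser", "output_generated",
             document_id="DOC-1234", human_reviewed="no")
log_ai_event("correspondence-summariser", "issue_raised",
             issue="summary omitted a key date", mitigation="manual redraft")
```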

3. Determine and document what content can be created and captured by AI technologies, what needs to be created and captured by a human being, and for AI generated content, at what points a human being is to be included.

4. Determine and document what actions can be undertaken by AI technologies, and when a human being is to be included to confirm or undertake the actions.
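
One simple way to implement and document such decision points is a routing function that decides whether an AI-proposed action proceeds automatically or is queued for a person, and returns the route taken so it can be captured in the record of the action. The action types and confidence threshold below are assumptions for illustration, not a standard.

```python
# Action types that must always involve a human (hypothetical examples)
REQUIRES_HUMAN = {"refuse_application", "approve_payment"}
CONFIDENCE_THRESHOLD = 0.9  # illustrative threshold, not a standard

def route_action(action_type: str, ai_confidence: float) -> str:
    """Return how an AI-proposed action was routed, so the route itself
    can be captured as part of the record of the action."""
    if action_type in REQUIRES_HUMAN or ai_confidence < CONFIDENCE_THRESHOLD:
        return "queued_for_human_confirmation"
    return "performed_automatically"

print(route_action("acknowledge_receipt", 0.97))  # performed_automatically
print(route_action("refuse_application", 0.99))   # queued_for_human_confirmation
```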

5. Be transparent and accountable regarding ownership and responsibility in relation to all types of AI used:

  • provide a clear attribution for (or transparent acknowledgement regarding issues attributing ownership of) text, images, sound or video produced by generative AI
  • establish appropriate oversight of and responsibility (including lines of delegation and accountability) for each AI system used that factor in AI system failures, overuse of AI, and malicious use
  • determine the legal requirements or obligations applicable to each system and carefully attribute legal liability for harms or errors to entities across the AI value chain, with effective safeguards placed at the most effective and appropriate points.

6. Establish an ongoing assessment and monitoring program that assesses AI technologies for trustworthiness:

  • use an assessment framework that is designed around a set of ethical principles to assess the AI technologies and systems from the idea/design phase through to implementation and ongoing operations
  • assess the AI technologies' performance in relation to risk, bias, and harm, and take steps to mitigate or otherwise address that risk, bias or harm
  • document and report on the results as part of an ongoing monitoring and assessment program.

7. Establish processes to ensure records generated through AI technologies are retained for the mandatory minimum period(s) required for the business function they document. For information about minimum retention periods, refer to the relevant functional or agency-specific retention and disposal authority (RDA).
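
As a sketch of what such a process might check, the function below calculates the earliest lawful disposal date from a record's date and a minimum retention period expressed in whole years. The seven-year period is an example only; the applicable RDA is authoritative.

```python
from datetime import date

def earliest_disposal_date(record_date: date, retention_years: int) -> date:
    """Earliest date a record may lawfully be disposed of, assuming the
    retention period runs in whole years from the record date.
    (Records dated 29 February would need special handling.)"""
    return record_date.replace(year=record_date.year + retention_years)

created = date(2024, 3, 15)
print(earliest_disposal_date(created, retention_years=7))  # 2031-03-15
```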

Ethical principles are included in most AI frameworks to ensure that harms caused by or through the use of AI technologies are fully considered, and the potential harm addressed.

The below table provides the Ethical Principles from Australia's AI Ethics Principles alongside some recordkeeping considerations regarding the creation, capture and management of records to address the principle.


Human, societal and environmental wellbeing

AI systems should benefit individuals, society and the environment.

Recordkeeping considerations:

- have alternatives been explored before the decision to use an AI tool and has the decision-making process been documented?

- has the purpose of the AI tool been documented?

- does the documentation include the objectives of the tool and a justification of why and how the specific AI tool is being used in relation to the business of the office?

- does it describe how the use is beneficial for individuals, society and the environment?

- are regular assessments undertaken and documented?

- do the assessments demonstrate that the tool is and remains fit for purpose?

- are any flagged issues addressed in line with the organisation's formal risk management program?

- is there evidence of alignment with relevant legislation, including the Public Records Act 1973?

Human-centred values

AI systems should respect human rights, diversity, and the autonomy of individuals.

Recordkeeping considerations:

- how are the tool and its use aligned with human values and how have the human values been defined?

- do the tool and its use align with the Human Rights Charter?

- has a Privacy Impact Assessment on the tool and its use been conducted?

- do the tool and its use align with the Public Sector Code of Conduct (where applicable)?

- does documentation include how any discrepancies or areas of non-alignment are addressed?

- are regular and routine risk assessments being conducted throughout the life of the tool to address possible harms and their mitigation (if possible) or consequences (if not)?

- has the tool been designed to augment, complement and empower human cognitive, social and cultural skills?

- how are diverse backgrounds, cultures and disciplines influencing the deployment and operation of the tool?

Fairness

AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

Recordkeeping considerations:

- can the AI tool's use and outputs be explained?

- is there a process to address stakeholder questions about the extent and involvement of AI in decisions and actions?

- what stakeholder consultation is undertaken about the tool, its use and the associated impact of its use?

- how often is stakeholder consultation undertaken and how is it documented?

- are regular and routine assessments being conducted throughout the life of the tool to address possible harms and their mitigation (if possible) or consequences (if not)?

- do the tool and its use align with the Human Rights Charter?

- does the documentation include how any discrepancies or areas of non-alignment with the Human Rights Charter are addressed?

- are relevant and lawful retention periods in place and implemented for records about the tool, its use, and impact? See PROV retention and disposal authorities for details

- have the points at which a human undertakes the approval, decision or other action been determined and communicated?

Privacy protection and security

AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

Recordkeeping considerations:

- are regular security assessments undertaken as part of a broader information and data security program?

- do the security assessments address unintended applications of AI systems, and potential abuse risks?

- are identified issues addressed in line with formal risk management programs?

- how are data and information integrity demonstrated, and does the documentation include the analysis as well as the overall result?

- how are data and information used by and produced by the tool governed?

- is data and information governance managed as part of the broader aligned records management, information management, data management, and information technology programs?

- can the AI tool's use and outputs be explained?

- is there a process to address stakeholder questions about the extent and involvement of AI in decisions and actions?

- has a Privacy Impact Assessment on the tool and its use been conducted?

- is a disposal program for records (including data and information) in place to ensure that records are not retained beyond their minimum retention period unless there is a justified exception (such as a legal hold)?

Reliability and safety

AI systems should reliably operate in accordance with their intended purpose.

Recordkeeping considerations:

- can the accuracy and relevance of AI generated information be demonstrated, and does this include the location and validity of source data?

- is the tool regularly monitored for errors?

- does the monitoring include system performance?

- what assessments are undertaken to ensure the tool is and remains reliable, and that outputs are accurate and reproducible?

- has the purpose of the AI tool been documented?

- does documentation include justification of why and how the specific AI tool is being used, and whether it is and remains fit for purpose?

- has responsibility for ensuring the AI tool and system are robust and safe been assigned and documented?

- are regular and routine risk assessments being conducted throughout the life of the tool to address possible harms and their mitigation (if possible) or consequences (if not)?

Transparency and explainability

There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and can find out when an AI system is engaging with them.

Recordkeeping considerations:

- does documentation address how stakeholders know when an AI system is engaging with them (regardless of the level of impact)?

- is there a process to address stakeholder questions about the extent and involvement of AI in decisions and actions?

- can the AI tool's use and outputs be explained?

- does the documentation of the tool include how it works and the location of source data?

- are the algorithm and data the tool was trained on known and documented?

- is it clearly indicated when an AI tool was used by the organisation, how it was used, and to what extent?

- are there rules in place governing which AI tools can be used and when, which tools cannot be used, and what data or information the tools are not to use?

- are staff adequately trained to understand the AI tool, as well as their responsibilities in using, interacting with, auditing and managing it?

- is there a monitoring and assessment program that identifies and reports on inappropriate, inaccurate, or poor use of the tool? Is there a process in place to address any issues identified?

- is the use of the tool included in a regular and formal audit and review program undertaken by a human?

- are any errors or issues documented and mitigated as part of a broader and formal risk management program?

Contestability

When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

Recordkeeping considerations:

- does the documentation of the tool include how it works and the location of source data?

- are the algorithm and data the tool was trained on known and documented?

- is it clearly indicated when an AI tool was used by the organisation, how it was used, and to what extent?

- can the AI tool's use and outputs be explained?

- is there a process to address stakeholder questions about the extent and involvement of AI in decisions and actions?

- is there an accessible process for challenging a decision or action if the individual or community found it harmful to themselves or the environment?

- what stakeholder consultation is undertaken about the tool, its use, and the associated impact of its use?

- how often is stakeholder consultation undertaken and how is it documented?

- are regular and routine risk assessments being conducted throughout the life of the tool to address possible harms and their mitigation (if possible) or consequences (if not)?

- is the use of the tool included in a regular and formal audit and review program undertaken by a human?

- have the points at which a human undertakes the approval, decision or other action been determined and communicated?

Accountability

People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Recordkeeping considerations:

- is there evidence of alignment with relevant legislation, including the Public Records Act 1973 and the Privacy and Data Protection Act 2014?

- is there documented evidence of who is responsible for oversight of the tool, and the use of the tool, including the results of using the tool?

- does the documentation of the tool include how it works and the location of source data?

- are the algorithm and data the tool was trained on known and documented?

- is it clearly indicated when an AI tool was used by the organisation, how it was used, and to what extent?

- are regular and routine risk assessments being conducted throughout the life of the tool to address possible harms and their mitigation (if possible) or consequences (if not)?

- are the implementation and use of the tool included in a regular and formal audit and review program undertaken by a human?


Tools for documenting AI technologies and related processes are provided below. As the tools were designed for international or interstate jurisdictions, please remember you may need to tailor them for use within Victoria. 

AI technologies are constantly evolving, including how, when and where they are used, what they are capable of doing, and the regulatory environment they are used within.

One of the ethical principles commonly referred to is explainability: people interacting with the organisation should be able to understand how AI has affected their interactions. This requires a documented understanding of how the processes and technologies work in relation to specific types of records/data.

When communicating with stakeholders, it can be useful to:

  1. Acknowledge when AI technologies are being used, which ones, what they are being used for, and how they will work throughout their lifecycles.
  2. Provide a notification for individuals interacting with generative AI and other AI technologies/applications, along with information regarding opting out and request-for-review processes. Include information on any 'black box technologies' used, as well as the benefits of using the tool and the impact of opting out.
  3. Explain the processes and technologies used that resulted in the decision, action, or other business the stakeholder was involved with.
