Safe and responsible AI in Australia
Resources
Introducing mandatory guardrails for AI in high-risk settings
ANA welcomes the intention to introduce mandatory guardrails for AI in high-risk settings. Australians use AI across most cultural and creative domains, but are still cautious, as our forthcoming Analysis Paper ‘Guide, Steer, Repeat’ shows. We attach an advance copy of this report for your information. In this submission, we answer the following consultation questions:
- Question 1: Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove?
- Question 2: Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?
- Question 4: Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?
- Question 8: Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings?
- Question 9: How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve ICIP?
In our role as a philanthropically funded, independent think tank, ANA is ready to provide further information about the response in this submission and would welcome the opportunity to discuss. We confirm that this submission can be made public.
Question 1: Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove?
ANA considers the proposed principles adequately capture high-risk AI, subject to the comment below about Principle (a), ‘The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification’.
ANA suggests Principle (a) omit the caveat ‘without justification’. Even where justifications are given for adverse impacts on rights recognised in Australian human rights law, those impacts remain relevant to consider. ANA notes this change would make Principle (a) consistent with the other proposed principles, none of which includes such a caveat.
Question 2: Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?
ANA welcomes the work led by the federal Office for the Arts (OFTA) to develop standalone legislation to protect Indigenous Cultural and Intellectual Property (ICIP). ANA also welcomes the ongoing discussions regarding ICIP and copyright law between the Department, OFTA and the copyright area of the Attorney-General’s Department.
ANA also welcomes the intention for the principles to account for impacts on First Nations people. If the mandatory guardrails come into effect before Parliament passes standalone ICIP legislation, the principles should be updated to include any additional impacts on First Nations people, communities and Country recognised in that legislation.
Question 4: Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)?
ANA understands some cultural and creative industries consider there is potentially a high risk from the unauthorised copying of copyright material from Australian creators to train AI systems, which is largely occurring overseas under foreign copyright laws. ANA also understands that some cultural and creative workers have concerns about job losses where applications of AI substitute for, or reduce the need for, their occupations.
Based on the information currently available, ANA does not consider these impacts rise to the level of unacceptable risk at this time. They are not, for example, equivalent in risk to the application of AI in biometric systems for identifying individuals.
Question 8: Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings? Are there any guardrails that we should add or remove?
ANA provides comments on select guardrails below.
ANA welcomes proposed Guardrail 3 (which includes ‘the origin and legality of the data set and collection processes’) and proposed Guardrail 9 (which includes record-keeping requirements for ‘a description of datasets used and their provenance’). These will be essential for understanding impacts on copyright (and possibly other arts and culture-relevant contexts), and for informing industry responses and policy making in these spaces.
ANA also welcomes proposed Guardrail 6 (which makes deployment of AI more transparent to end-users) and proposed Guardrail 8 (which requires transparency with other organisations across the AI supply chain about data, models and systems to help them effectively address risks). These will enable cultural and creative industries, as well as Australians engaged in cultural and creative activities, to decide when, why and how they use AI.