Department of Industry, Science and Resources
10 Binara St
Canberra ACT 2601
[Submission via Webform]
Re: Response to Proposals Paper – Mandatory Guardrails for AI in High-Risk Settings
Amstelveen welcomes the opportunity to provide feedback on the proposed Mandatory Guardrails for AI in High-risk Settings.
Amstelveen is a specialist risk and compliance consultancy operating across Australia and New Zealand. Our clients include public and private sector organisations at the forefront of AI deployment, which generally have a high degree of exposure to technology and data-related risks, such as those in financial services, government, telecommunications and energy.
In this submission, we have responded to a subset of the questions listed in the proposals paper titled “Proposals paper for introducing mandatory guardrails for AI in high-risk settings”.
1. Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove?
To further improve the breadth and specificity of the principles, we suggest the following:
1. Consideration of intention, in addition to outcome:
It is suggested that the purpose and intent behind the development of AI models and systems should be specifically referenced in a principle. This could be achieved by adding a principle which designates an AI system as high-risk when it is used in a high-risk context, e.g. facilitating or enabling high-risk activities relating to human safety, defence, national security, etc. In doing so, the framework would:
- Demonstrate recognition of the importance and impact of key goals and objectives beyond outcomes; and
- Ensure greater alignment with global AI standards and regulations. For example, the EU Artificial Intelligence Act places obligations on developers of high-risk AI systems to have robust risk management, data governance, and technical documentation.
This reflects recent developments, including findings from the 2023 Royal Commission into the Robodebt Scheme, which demonstrated how relatively simple automated decision-making can drive widespread and severe consequences that may not be predictable from outcomes alone, but could have been better governed at the planning and design stages.
2. Protection of vulnerable communities
While Principle D references “groups of individuals” and “cultural groups” (p. 19), we suggest considering the explicit inclusion of particular vulnerable communities, defined on the basis of age (particularly children and the elderly), disability and socio-economic situation. This would support additional protections for specific vulnerable community groups at higher risk of being influenced or adversely affected by AI systems.
3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?
To clarify what constitutes high-risk AI and how this may evolve in future, both approaches may be necessary. While a principles-based approach supports the identification of current and potential high-risk AI scenarios, it may introduce a degree of subjectivity. Providing examples to support the principles may mitigate this; however, a list-based approach would provide more precise guidance, commensurate with the low levels of trust Australians currently demonstrate towards AI (p. 3; Ipsos, 2023; Roy Morgan, 2023).
It is worth noting that the Security of Critical Infrastructure (SOCI) Act 2018 takes a list-based approach through its recognition of 11 critical sectors. However, a list-based approach alone is more likely to be outpaced by the rate of AI development and will fail to capture emergent high-risk uses.
A dual approach could present a comprehensive list which clearly defines and captures current understandings of high-risk AI domains, supported by the principles, allowing emerging high-risk use cases not explicitly listed to be captured, assessed and managed in accordance with those principles.
In addition to our suggestions in Q1, we suggest consideration of the following changes to the Domain Areas list provided in Table 1 (p. 26):
- Standardise the definition of “Critical Infrastructure”: The list would benefit from coverage of the 11 critical infrastructure sectors identified in the SOCI Act 2018, providing clarity that “critical digital infrastructure” (p. 26) applies to the supply chains, information technologies and communication networks of “Communication, Financial services and markets, Data storage or processing, Defence industry, Higher education and research, Energy, Food and grocery, Healthcare and medical, Space technology, Transport, and Water and sewerage” organisations. Further, separating Systems of National Significance (SONS) into its own high-risk use case would give asset owners that already carry enhanced cyber security obligations a clearer understanding of their AI-adjacent requirements under the guardrails.
- Update “Access” to “Decisioning” in “Access to essential public/private services”: While the use of AI to determine eligibility for and access to public and private services is indeed a high-risk use case, AI is also being used to make decisions beyond access. For example, insurers are increasingly using AI to determine claims outcomes, and AI-driven medical decisioning is a powerful emerging use case.
- Capture “Access to housing”: As outlined in Article 11 of the ICESCR, this right should be explicitly protected for both public and private housing. Although implied in Principles A and C, stating it explicitly would provide certainty and reassure Australians that it is treated as a priority.
- Add “Citizenship process”: In alignment with Article 24 of the ICCPR and Attachment D (p. 63), which covers Migration/Border protection, and given the importance of national security to a country whose population and economic growth have relied on immigration, the use of AI in determining access to citizenship should also be captured.
- Add “Access to internet”: Amstelveen acknowledges that internet access is not explicitly recognised as a human right; however, communications is a recognised critical infrastructure sector, and the internet undoubtedly facilitates “the economic development and enjoyment of a range of [other] human rights” (AHRC, n.d.). As such, AI’s use in determining an individual’s access to the internet should be explicitly captured. Further, given that the digital divide is a nationwide issue of equity, the Australian government should “be cognisant of [the way in which] individuals’ access to digital technology has quality of life implications” (Bentley & Naughtin, 2024).
4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)? If so, how should we define these?
Given that AI technologies are rapidly changing, accurately determining a complete list of use cases posing unacceptable risk levels is complex. Amstelveen determines prohibited use cases by evaluating our proposed “high-risk domain areas” and considering niche use cases that represent the most significant level of harm to the safety and security of individuals and the nation. We recognise the limitation of prohibiting only those use cases of which there is current awareness, informed by international case studies, rather than potential future uses; however, it is crucial for the Government to legislate on issues that are actively occurring and have material impact.
To further the goal of “International Engagement” (p. 7), Amstelveen proposes that the Australian Government adopt the detailed and practical list of Prohibited AI Practices set out in Article 5 of the EU Artificial Intelligence Act. These bans are narrowed versions of the domain areas listed as high-risk AI use cases, and would affirm Australia’s alignment with international prohibitions on:
- Using subliminal, manipulative, or deceptive techniques to distort behaviour and impair decision-making, causing significant harm;
- Exploiting age, disability, or socio-economic vulnerabilities to distort behaviour, causing significant harm;
- Inferring sensitive attributes (race, political opinions, etc.) from biometric data, except for lawful purposes or law enforcement;
- Social scoring based on behaviour or traits, leading to negative treatment;
- Assessing criminal risk based solely on profiling or personality traits, except to support human assessments with verifiable facts;
- Creating facial recognition databases by scraping images from the internet or CCTV;
- Inferring emotions in workplaces or schools, except for medical or safety reasons; and
- Using real-time remote biometric identification in public spaces for law enforcement (subject to limited exceptions).
In addition to the use cases identified within the EU Artificial Intelligence Act, Amstelveen further recognises scope for banning the application of Generative AI to Politically Exposed Persons’ (PEPs) faces and voices, given the risk and impact of such portrayals relative to their realised and potential benefits. Amstelveen proposes that the determination of PEPs in this scenario align with the definitions prescribed by AUSTRAC. We further note that various international jurisdictions are already responding to high-profile cases of PEPs’ likenesses being altered by AI in compromising situations. The US and South Korea treat these cases as criminal offences, with the latter introducing additional regulatory reforms. AI-enabled deception poses severe challenges to citizens’ democratic engagement in federal elections because of its impact on perceptions of truth.
We recognise the challenges in identifying all high-risk AI use cases and suggest that the Australian government consider adopting an approach similar to the EU’s Prohibited AI Practices to help ensure alignment with international safety and ethical standards. Additionally, Amstelveen sees value in banning Generative AI applications involving PEPs to protect the sanctity of democracy and public trust in the nation’s democratic processes, particularly in election campaign material.
8. Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings? Are there any guardrails that we should add or remove?
While each guardrail addresses key concerns about AI in both high- and low-risk settings, there is further opportunity to improve adoption while minimising misunderstandings in their interpretation. Amstelveen suggests providing further guidance in the following areas:
- Definition of reasonable and accountable persons to monitor and oversee end-to-end AI development and deployment (as relates to Guardrail 1).
- Clarification of qualified persons to interpret output and understand the core capability and limitations of an AI model (to accurately assess the accuracy of algorithmic advice) (as relates to Guardrail 5).
1. Definition of reasonable and accountable persons to monitor and oversee end-to-end AI development and deployment (as relates to Guardrail 1).
Amstelveen proposes including a conflict-of-interest clause in Guardrail 1’s definition of reasonable and accountable persons, to ensure independence and integrity in assessing the appropriateness and completeness of end-to-end AI development and deployment. We recommend alignment with the EU’s General Data Protection Regulation, which requires a clear segregation of duties for such persons, as follows:
A reasonable and accountable person should, for example:
- Not also be a controller of processing activities (e.g. Chief Technology Officer);
- Report directly to senior management; and
- Have the authority to investigate.
2. Clarification of qualified persons
To ensure commensurate human oversight in the development and deployment of AI, Guardrail 5 could be further strengthened with more prescriptive criteria around ‘sufficiently qualified to interpret output (…) to accurately assess the accuracy of algorithmic advice’. In its current state, the experience and competency required for human oversight are not governed by any formal qualification criteria or metrics. To provide greater assurance over skills and competency with respect to AI systems, Amstelveen suggests prescribing more detailed guidance for organisations to independently determine the appropriateness of qualified persons across the AI lifecycle, commensurate with organisational maturity, capability and capacity. Given that the guardrail places a high degree of importance on human oversight, assuring and upholding human skills and competency reinforces the mitigation of risks associated with AI’s speed, scale and increasing autonomy (p. 38).
12. Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?
The guardrails should apply to all uses of AI in high-risk contexts, regardless of the size of the entity to which they apply; however, regulators should be encouraged to consider the complexity and context of that entity when enforcing the guardrails. This is similar to the approach of the Australian Prudential Regulation Authority in enforcing Prudential Standards: small-to-medium sized regulated entities are subject to the same Prudential Standards as large institutions, but it is recognised that their application will vary depending on the complexity and context of the relevant entity.
13. Which legislative option do you feel will best address the use of AI in high-risk settings? What opportunities should the government take into account in considering each approach?
Amstelveen would encourage the pursuit of a Framework Approach (p. 6), actively amending legislation where high-risk AI use is relevant and creating a separate framework to provide AI-specific regulatory guidance.
Opportunities in Present Legislation
It would be remiss not to leverage the effectiveness of existing regulatory infrastructure beyond the listed “regulatory arrangements” (pp. 43-45). The following regulations should be reviewed and updated to better address high-risk AI scenarios:
- The SOCI Act: As identified in our responses to Q1 and Q3, 11 sectors are obliged to ensure protections over systems critical to the nation. Clear communication of how AI affects these obligations from a cyber security perspective is required to achieve the necessary regulatory clarity.
- The Privacy Act and related Bills: Current and future legislation around the data lifecycle has implications for how AI systems can be developed, and already provides individuals with certain protections and rights regarding their privacy.
- Commonwealth Prudential Standards: While these apply only to financial services, they contain requirements that may intersect with AI use, especially from an operational risk perspective. Here, AI risks relevant to the listed domain area “access to essential private services” can be better captured to ensure appropriate governance of the use of AI systems by organisations in this sector.
- Crimes Act(s): In July 2024, a man in Victoria was sentenced for producing AI-generated child abuse images under pre-existing criminal legislation (s 51C of the Crimes Act 1958 (Vic)) (Australian Federal Police, 2024). This example demonstrates that there is already some level of regulatory protection over high-risk AI domain areas significant to Australians, even in laws that do not reference AI and were certainly not written with AI in mind.
- Discrimination Act(s): At the federal level, unlawful discrimination is prohibited through four pieces of legislation: the Age Discrimination Act 2004 (Cth), the Disability Discrimination Act 1992 (Cth), the Racial Discrimination Act 1975 (Cth) and the Sex Discrimination Act 1984 (Cth). Here, some of the principles and listed high-risk AI use cases would benefit from enhanced guidance on matters relevant to the protection of individuals’ rights.
- Broadcasting Services Act 1992 and Commonwealth Electoral Act 1918: These Acts should be amended to empower the Australian Communications and Media Authority and the Australian Electoral Commission to prevent the use of deepfakes in election campaign material, as explored in our response to Q4.
It would be practical and reasonable for organisations to comply with AI legislation through the frameworks and regulatory bodies (e.g. the Courts, OAIC, APRA/ASIC, AUSTRAC, Cyber and Infrastructure Security Centre) with which they are already familiar. Many of the high-risk use cases would likely already be covered, at least in part, by mature organisations’ existing management of emerging technologies more broadly. Existing regulatory bodies should be leveraged where possible for their expertise in specialised compliance matters and their coverage of the relevant legislative areas.
Justification of a Dedicated AI Framework
Dedicated AI-based legislation would help organisations make more informed decisions on how their current compliance obligations could be leveraged to better manage AI, and would provide coverage of governance decisions specific to AI usage. The following reasons support specialised regulatory oversight beyond current regulations and laws:
- Governance is inherently process-oriented, placing an emphasis on the traceability of decision-making to determine how incidents eventuated. People and machine learning systems can be interviewed and/or audited in ways that many AI systems cannot, demonstrating the key risk that monitoring and governing AI presents significant challenges;
- Our increasing reliance on AI systems increases the likelihood of negative impacts from their use; and
- To align with global standards, as other jurisdictions have introduced or are exploring dedicated AI legislation.
Conclusion
Thank you for the opportunity to provide input into this proposals paper. Please feel free to contact us to discuss any of these items in further detail.
Sincerely,
Amstelveen
Email: info@amstelveen.com
Address: Level 11, 570 George Street, Sydney NSW 2000
Web: http://www.amstelveen.com
Canva’s Generative AI tool, “Magic Media”, was used to create the attached cover image.