The explosion of generative AI over the past few years has generated (no pun intended) significant waves for both industry and policymakers.
Among policymakers worldwide, the EU has led the charge with the EU AI Act, which entered into force in August 2024. With increasing calls for global regulatory alignment to ensure both safety and competitive fairness, the EU AI Act is poised to serve as a global benchmark for other governments and regulators.
What are the implications for risk managers? How should the EU AI Act and other developing global regulatory frameworks be navigated? What are the no-regrets actions we can take now to prepare?
In this blog, we explore:
- The key components of the EU AI Act
- Looking beyond regulatory compliance
- Using model risk management (MRM) to manage the risks
- AI controls management
Watch Protecht's AI risk controls webinar on demand for a further exploration of the regulatory drivers shaping AI governance:
Key components of the EU AI Act
The EU AI Act[1] takes a risk-based approach to regulation, classifying AI systems based on their potential impact. For risk managers, the critical question is, "Does this apply to my organisation?" The Act covers both ‘narrow’ AI – systems built for a specific purpose, like fraud detection – and general-purpose AI models, which include most forms of generative AI such as large language models and image generators.
The Act defines two major roles: providers, who create AI models, and deployers, who use these models. Your organisation may be both. For example, using a commercially available fraud detection model makes you a deployer, whereas developing and deploying your own model may make you both a provider and a deployer.
Whether you are a provider or a deployer, it is important to ask: 'What are we using AI for?' The Act defines both prohibited and high-risk use cases, summarised in the table below.
| Prohibited | High risk |
| --- | --- |
| Manipulative AI | Limited biometric identification |
| AI that exploits vulnerable groups | AI used as safety features in components |
| Social scoring models | Critical infrastructure |
| Crime prediction based on profiling | Education & training (e.g., admissions) |
| AI to expand facial recognition databases | Employment & worker management |
| Emotion inference in workplaces or education | Access to essential services |
| Biometric categorisation | Law enforcement |
| | Migration, asylum, and border control |
| | Justice and democratic processes |
While not all of these may apply to your organisation, risk managers should scrutinise recruitment practices (employment and worker management) and any AI used for assessing creditworthiness or insurance pricing (captured under access to essential services). These could fall under "high-risk" applications, demanding additional compliance actions. If they do, what do you need to know?
Requirements for high-risk systems
For high-risk systems, the obligations differ depending on whether you’re a provider or a deployer.
| Providers | Deployers |
| --- | --- |
| Risk management system | Human oversight |
| Data governance | Ensure data input is relevant |
| Technical documentation | Use AI in accordance with instructions |
| Record-keeping | Notify provider of serious harm and cease use |
| Transparent information to deployers | Create and retain logs |
| Human oversight | Data protection impact assessment |
| Accuracy, robustness, and cybersecurity | Fundamental rights impact assessment |
| Implement a quality management system | |
| Automatically generate logs | |
Deployers can’t simply point to the provider if things go wrong (as in vendor risk management, you can’t transfer risk to the vendor). The deployer has to demonstrate they have used the system in accordance with instructions (e.g. have not tried to circumvent the provider’s safeguards), generate logs of activity and output, and, perhaps most importantly, maintain human oversight.
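To make these deployer obligations more tangible, here is a minimal sketch of how activity logging and a human oversight gate might be wired around a third-party model. Every name in it (the log path, `log_ai_decision`, the routing threshold) is a hypothetical assumption for illustration, not a prescription from the Act or any vendor.

```python
# Minimal sketch of deployer-side logging and human oversight (illustrative only).
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # assumption: simple append-only decision log


def log_ai_decision(model_id: str, inputs: dict, output: dict,
                    reviewer: str | None, overridden: bool) -> None:
    """Append one structured record per AI-assisted decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,                # what the model was asked
        "output": output,                # what it returned
        "human_reviewer": reviewer,      # who exercised oversight, if anyone
        "human_overridden": overridden,  # whether the human changed the outcome
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def route_for_oversight(fraud_score: float, threshold: float = 0.8) -> str:
    """Send high-score outputs to a human reviewer rather than acting automatically."""
    return "refer_to_human" if fraud_score >= threshold else "auto_approve"
```

The point is not the specific fields, but that every AI-assisted decision is traceable and a human can intervene before the output takes effect.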
Transparency requirements
You may not have any high-risk systems. However, if you plan to integrate AI into your products or services, transparency requirements may still apply, such as the following (illustrated in the sketch after this list):
- Providers must inform users they are interacting with AI (e.g. not pretending a chat window is human).
- Providers of systems generating synthetic content must ensure it is machine-readable and detectable as AI-generated.
- Deployers creating synthetic image, audio, or video content (e.g., deepfakes) must disclose that it is artificially created.
- Deployers creating synthetic text or content on public interest matters must disclose it is artificially generated or manipulated.
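As a rough illustration of what a machine-readable disclosure could look like in practice, the sketch below attaches an 'AI-generated' marker to synthetic content before it is published. The field names and wording are assumptions for this example; actual marking and watermarking approaches will depend on your tooling and the technical standards that emerge under the Act.

```python
# Illustrative only: attach a machine-readable "AI-generated" disclosure to
# synthetic content before publication. Field names are assumptions, not a
# prescribed standard from the EU AI Act.
from datetime import datetime, timezone


def mark_as_ai_generated(content: str, model_id: str) -> dict:
    """Wrap generated content with disclosure metadata."""
    return {
        "content": content,
        "ai_generated": True,                                    # machine-readable flag
        "generating_model": model_id,                            # provenance of the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    }


payload = mark_as_ai_generated("Draft quarterly outlook summary...", "example-llm-v1")
```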
In September 2024, the Australian Government released a consultation paper proposing mandatory guardrails for AI in high-risk settings. The proposal is clearly influenced by, and references, the EU AI Act. It poses questions such as whether to adopt a principles-based approach to defining high risk, or a ‘list-based’ approach similar to the EU AI Act’s.
While the consultation period has now closed and the proposals are still being debated, organisations should also consider the benefits of implementing safeguards that go beyond regulatory compliance.
Looking beyond regulatory compliance
The EU AI Act defines risk as "the combination of the probability of harm occurring and the severity of that harm." This definition primarily focuses on reducing harm to individuals and society, aligning with a public risk perspective.
However, ISO 31000 defines risk as "the effect of uncertainty on objectives", which includes an organisation’s value creation objectives, not just societal risk appetite[2]. Regardless of the regulatory framework, your organisation must manage the risks associated with AI implementation to ensure both compliance and alignment with business goals.
Key steps for AI risk management include the following (a simple worked sketch follows the list):
- Clearly articulate objectives related to the use of AI.
- Understand how AI integrates with your operating model.
- Identify risks that could hinder achieving these objectives.
- Implement effective controls to mitigate those risks.
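As a simple worked sketch, here is one way these steps could be captured in a lightweight AI risk register, using a likelihood-times-severity rating in the spirit of the Act's risk definition. The data structure, field names, and 1-5 rating scales are assumptions for illustration, not a prescribed format.

```python
# Illustrative AI risk register entry linking objectives, risks, and controls.
# Field names and 1-5 rating scales are assumptions for this sketch.
from dataclasses import dataclass, field


@dataclass
class AIRisk:
    description: str
    objective_affected: str          # which business objective the risk threatens
    likelihood: int                  # 1 (rare) to 5 (almost certain)
    severity: int                    # 1 (minor) to 5 (severe harm)
    controls: list[str] = field(default_factory=list)

    @property
    def rating(self) -> int:
        """Simple likelihood x severity score, echoing the Act's definition of risk."""
        return self.likelihood * self.severity


risk = AIRisk(
    description="Customer chatbot gives incorrect product advice",
    objective_affected="Improve customer service response times with generative AI",
    likelihood=3,
    severity=4,
    controls=["Human review of sampled transcripts", "Restricted knowledge base"],
)
print(risk.rating)  # 12: compare against appetite to prioritise treatment
```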
Let’s further explore model risk governance to manage these risks, as well as controls assurance over specific AI risks.
Using model risk management to manage the risks
Effective AI model governance requires overseeing the full lifecycle of your AI models, from development through deployment to monitoring. Some questions risk managers should ask:
- Do we have a comprehensive library of our models? Who is accountable for the library and for each individual model?
- Are there models in use that aren’t documented in our library? What gives us confidence that we have identified all models?
If those questions scare you, you may have some work to do in strengthening model risk governance. You may have existing model risk management practices for algorithmic or rules-based models, which can be extended to include AI models.
Consider these components as part of your model risk management:
- Governance framework: Establish clear roles and responsibilities in a model risk policy. Accountability for oversight must be distinct, including responsibility for each model's performance and outcomes.
- Model purpose and alignment: Ensure the intended beneficiaries of each model are well-defined, and that outcomes are equitable for all impacted.
- Validation and testing: Conduct rigorous pre-deployment testing. Evaluate data quality, ensure transparency, and assess the need for guardrails or safety features.
- Deployment transparency: Clarify deployment methods and ensure all roles and responsibilities are defined during implementation.
- Continuous monitoring: Monitor the model’s performance and its integration with real-time data sources. Track changes in data compatibility and revalidate as needed.
For AI models in particular, you may need to capture which products and services they are integrated with, any regulatory classifications that apply, and an assessment of the minimum regulatory requirements (such as watermarking) the model needs to meet.
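One way to extend an existing model inventory with these AI-specific attributes is sketched below. The field names, classification enum, and example values are assumptions for illustration; your schema should reflect your own policy and the classifications relevant to your jurisdictions.

```python
# Illustrative model inventory record with AI-specific attributes.
# Field names, the classification enum, and example values are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class EUAIActClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"    # e.g. transparency obligations only
    MINIMAL_RISK = "minimal_risk"


@dataclass
class ModelRecord:
    model_id: str
    owner: str                                   # accountable individual
    purpose: str
    role: str                                    # "provider", "deployer", or both
    products_using_model: list[str] = field(default_factory=list)
    eu_ai_act_class: EUAIActClass = EUAIActClass.MINIMAL_RISK
    regulatory_requirements: list[str] = field(default_factory=list)
    last_validated: str | None = None            # date of most recent validation


record = ModelRecord(
    model_id="credit-scoring-v3",
    owner="Head of Credit Risk",
    purpose="Assess creditworthiness of retail loan applicants",
    role="deployer",
    products_using_model=["Personal loans", "Credit cards"],
    eu_ai_act_class=EUAIActClass.HIGH_RISK,
    regulatory_requirements=["Human oversight", "Activity logging", "Data relevance checks"],
)
```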
AI controls management
Beyond model risk management, you may want to map specific controls in your organisation to manage the risks arising from your AI implementation. An effective controls management framework includes:
- Designing and implementing controls
- Controls assurance and testing
- Metrics and reporting
- Continuous improvement
While it will depend on your specific implementation, you may need to consider controls such as the following (one of these is sketched in code after the list):
- Data clean-up activities if you are integrating a provider’s model with your own internal data sources
- Access controls on who can commit changes to the model to production
- Access controls on who can modify data sources used to train the model or in ongoing retrieval
- Security controls to prevent prompt injections or other adversarial attacks on your model
- Sampling and validation of model outputs
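As one example, the sampling-and-validation control from the last bullet might look something like the sketch below: periodically draw a random sample of logged outputs and check them against simple rules before escalating any failures. The validation rules, field names, and sample size are assumptions for illustration.

```python
# Illustrative sampling-and-validation control over logged model outputs.
# The validation rules, field names, and sample size are assumptions.
import random


def sample_outputs(logged_outputs: list[dict], sample_size: int = 50) -> list[dict]:
    """Draw a random sample of recent model outputs for review."""
    return random.sample(logged_outputs, min(sample_size, len(logged_outputs)))


def validate_output(record: dict) -> bool:
    """Example checks: disclosure flag present and no raw customer identifiers leaked."""
    has_disclosure = record.get("ai_generated") is True
    no_identifier_leak = "customer_id" not in record.get("content", "")
    return has_disclosure and no_identifier_leak


def run_sampling_control(logged_outputs: list[dict]) -> float:
    """Return the pass rate for the sampled outputs; failures should be escalated."""
    sample = sample_outputs(logged_outputs)
    if not sample:
        return 1.0
    return sum(validate_output(r) for r in sample) / len(sample)
```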
These controls are not set-and-forget. Developing control tests at the same time as the control itself can help refine the control's design. Ideally, these tests are then run on an automated basis.
Where applicable, key control indicators can be used to support this assurance, providing early warning that controls are not performing as expected. The outcomes of controls testing can then be incorporated into reporting, either on model risk specifically or integrated into broader enterprise risk management.
Outcomes of testing then drive continuous improvement, adjusting controls in a dynamic environment.
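A minimal sketch of how a key control indicator could be derived from automated control test results and used as an early warning trigger is shown below. The traffic-light thresholds and labels are assumptions; in practice they would align to your risk appetite and reporting framework.

```python
# Illustrative key control indicator (KCI) derived from control test results.
# Thresholds and status labels are assumptions for this sketch.

KCI_AMBER = 0.95   # pass rate below this triggers investigation
KCI_RED = 0.90     # pass rate below this triggers escalation


def evaluate_kci(test_results: list[bool]) -> str:
    """Convert automated control test outcomes into a traffic-light KCI status."""
    if not test_results:
        return "no_data"
    pass_rate = sum(test_results) / len(test_results)
    if pass_rate < KCI_RED:
        return "red"      # escalate: the control is likely ineffective
    if pass_rate < KCI_AMBER:
        return "amber"    # investigate and adjust the control design
    return "green"        # operating as expected


status = evaluate_kci([True] * 47 + [False] * 3)  # pass rate 0.94 -> "amber"
```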
Conclusions and next steps for your organisation
The EU AI Act brings specific requirements for high-risk and prohibited use cases for organisations operating within the EU, and is likely to be a good predictor of how regulatory developments will evolve in Australia and elsewhere.
However, regulatory compliance alone is not enough. Effective model governance and robust controls assurance are critical for organisations to leverage AI responsibly, maximise value creation, and manage associated risks effectively.
To understand these challenges in more depth, join our AI Risk Controls: Is Your AI Under Control or Running Wild? webinar on demand and watch me and David Tattam explore the regulatory drivers shaping AI governance, the risks posed by AI implementation, and the critical controls frameworks needed to manage these risks effectively.
You’ll gain actionable insights on:
- Regulatory trends, including the EU AI Act and its implications for AI governance.
- Mitigating AI risks through robust model governance and assurance practices.
- Integrating AI risk management into your broader enterprise risk framework.
Whether you're a Chief Risk Officer, compliance officer, or governance professional, this session will equip you with the knowledge to align AI strategies with your organisational goals while maintaining compliance.
Watch the webinar on demand now: