The US Department of the Treasury has released a report on managing artificial intelligence (AI)-related risk in the financial services sector, prompted by an Executive Order. The report's findings were informed by interviews with industry stakeholders in financial services, and its recommendations are comprehensive and worth reviewing even if you're not in the US or in financial services. The good news is that you most likely already have the enterprise risk management capabilities to meet them.
In this blog, we will cover:
- Observations about the AI risks that organisations face
- How interviewees are responding to these risks
- Leveraging existing enterprise risk management and model risk management capabilities
Protecht’s Information technology risk management eBook has a comprehensive section on AI and IT risk management. Find out more:
Using AI to monitor and manage cybersecurity and fraud risk
The report is clear that it considers AI broadly, with generative AI as a subset. Most financial institutions are using – and have been for some time – AI tools as part of their cybersecurity or fraud programs. Of course, maturity varies across the sector, with ongoing uplift in capability.
Of note is that many institutions use AI models that are built by third parties – or even by fourth parties. For example, a third-party vendor might specialise in cybersecurity but outsource the build of its AI models to another provider. These tools may then be tuned with the bank’s in-house data before deployment.
Institutions are taking a cautious approach to incorporating generative AI into business operations. While the report doesn’t call out cybersecurity or fraud specifically, this caution is implied through its commentary on limited adoption for activities that require high levels of assurance. This aligns with the Executive Order’s requirement to minimise risk in AI deployments.
Dealing with AI threats
Proactive use of AI is one side of the coin; the third section of the report turns to threats to the organisation. It covers two quite different types of threats:
- Threat actors leveraging AI to conduct cyberattacks or fraud
- Threat actors attacking the organisation’s AI systems
The first applies equally to all organisations, while the second scales with the organisation’s internal adoption of AI.
No doubt you’ve been on the receiving end of countless phishing attempts. The use of AI is making these social engineering attempts harder to spot: generative AI can tailor messages to individual targets, making them appear more authentic while also allowing attacks at scale.
The use of AI by threat actors is not a new risk in and of itself; it changes the way that existing cybersecurity, fraud or disruption risks can occur, and perhaps most importantly the speed at which they can escalate. In particular, AI and automation may allow threat actors to identify and exploit vulnerabilities more quickly.
Attacks on AI systems are more nuanced. If you (or your third parties) are implementing AI systems, you need assurance that they will achieve the expected outcomes and have a high level of integrity. While we cannot blindly trust technology, many end users of AI (whether specialised tools like tuned cyber threat models or general-purpose generative AI) will have little or no knowledge of how the model arrives at its results or outputs. Unless something is obviously ‘off’ or they are trained to look for anomalies, they will likely trust the model.
Threat actors, including insiders, might modify the parameters of a model directly to manipulate how it operates and the outcomes it produces. Another method of attack is data poisoning: modifying the data that the model is trained on. Depending on the threat actors’ intentions, this may result in AI that compromises personal privacy or safety, or that discreetly introduces harmful interactions and outputs.
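As one illustration of a mitigating control for this kind of tampering, the sketch below compares cryptographic hashes of model artefacts and training data extracts against a previously approved manifest, so that unauthorised changes can be flagged before deployment. This is a minimal, generic example: the file names and manifest format are hypothetical and are not drawn from the Treasury report.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(manifest_path: Path) -> list[str]:
    """Compare current artefact hashes against an approved manifest.

    The manifest is assumed (hypothetically) to be a JSON mapping of
    file path -> expected SHA-256 digest, recorded when the model was
    last validated and approved.
    """
    manifest = json.loads(manifest_path.read_text())
    issues = []
    for file_name, expected in manifest.items():
        path = Path(file_name)
        if not path.exists():
            issues.append(f"MISSING: {file_name}")
        elif sha256_of(path) != expected:
            issues.append(f"MODIFIED: {file_name} no longer matches approved hash")
    return issues


if __name__ == "__main__":
    # Hypothetical manifest covering model weights and the training data extract.
    problems = verify_artifacts(Path("approved_manifest.json"))
    if problems:
        print("Integrity check failed - escalate before deployment:")
        for p in problems:
            print(" -", p)
    else:
        print("All model artefacts match the approved manifest.")
```

A check like this won’t stop poisoning at the data source, but it makes silent modification of an approved model or its training extract much easier to detect.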
The use of third parties also comes with its own risks: not just from malicious cyber threats that might compromise their models, but also from how they might change their models over time. You may need additional assurance over how they govern their models.
Leveraging existing enterprise risk management capabilities
The report next considers existing regulatory requirements that might cover the risks of AI. While these are regulatory in nature for financial services, they are good practice for any organisation:
- Risk management
- Model risk management
- Technology risk management
- Data management
- Third-party risk management
While not specified, we interpret the first to be enterprise risk management (ERM), which ultimately encompasses the rest. Some risk types may require specific processes or requirements, but ultimately the goal is to manage risk to the enterprise.
To that end, organisations likely already have the processes required to manage these risks. This aligns with those interviewed for the report, who were embedding the management of AI risks into their ERM programs. Existing risk processes are sufficient – you just need to understand how existing risks are changing. Business lines are responsible for managing their risks and may require some education on AI and on how threat actors can use and exploit it; however, existing approaches to risk mitigation and controls management remain the same.
At Protecht, we adopt a process we call Risk in Motion to help bring critical risk information to the surface. This brings together related risk processes and components, including risk assessments, attestations, key risk indicators, controls assurance, incidents, and action management.
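Risk in Motion is a methodology rather than a piece of code, so the sketch below is only a generic illustration of one of its components: evaluating a key risk indicator feed against appetite thresholds so that breaches surface for action. The indicator names and threshold values are purely illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class KeyRiskIndicator:
    name: str
    value: float     # latest observed value
    warning: float   # amber threshold
    breach: float    # red threshold (appetite exceeded)

    def status(self) -> str:
        if self.value >= self.breach:
            return "BREACH"
        if self.value >= self.warning:
            return "WARNING"
        return "OK"


# Illustrative AI-related indicators only - not a prescribed set.
indicators = [
    KeyRiskIndicator("Phishing emails reported per 1,000 staff (monthly)", 42.0, 30.0, 50.0),
    KeyRiskIndicator("AI model validations overdue", 3.0, 1.0, 3.0),
    KeyRiskIndicator("Third-party AI vendors without current assurance", 1.0, 1.0, 2.0),
]

for kri in indicators:
    status = kri.status()
    print(f"{kri.name}: {kri.value} [{status}]")
    if status != "OK":
        # In practice this would raise an action or incident in the ERM system.
        print(f"  -> Raise follow-up action for '{kri.name}'")
```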
Financial institutions will already perform model risk management, including model risk governance, risk management and reporting. For any model, it’s important to review data quality, how bias is monitored and managed, and the explainability of the model. While this approach has typically been applied to financial models, the same requirements apply to any AI application, including those used for cybersecurity, fraud, or integration with products and services. This includes regular testing and validation of the models.
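As a concrete (and deliberately simplified) example of ongoing validation, the sketch below uses the population stability index, a common drift measure, to compare recent production scores against the baseline used when the model was last validated. The score distributions are simulated and the thresholds are rule-of-thumb values, not requirements from the report.

```python
import numpy as np


def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between baseline and recent model scores.

    Assumes scores are scaled to [0, 1]. Common rule of thumb:
    < 0.10 stable, 0.10-0.25 investigate, > 0.25 significant drift.
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Floor the proportions so empty bins don't produce log(0) or division by zero.
    baseline_pct = np.clip(baseline_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - baseline_pct) * np.log(recent_pct / baseline_pct)))


# Simulated score distributions: baseline from validation, recent from production.
rng = np.random.default_rng(42)
baseline_scores = rng.beta(2, 5, size=5_000)
recent_scores = rng.beta(2.5, 5, size=5_000)  # deliberately shifted to show drift

psi = population_stability_index(baseline_scores, recent_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.25:
    print("Significant drift - trigger model revalidation")
elif psi > 0.10:
    print("Moderate drift - investigate and document findings")
else:
    print("Distribution stable - no action required")
```

Drift monitoring is only one element of validation; bias testing and explainability reviews would sit alongside it in a full model risk management cycle.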
Conclusions and next steps for your organisation
If you aren’t already adopting them, here are some actions to consider:
- Deliver general awareness training on AI, which will improve existing risk assessment processes
- Review the existing risks you face that may have changed due to the evolving nature of AI
- Integrate AI-related risks, and the controls that manage them, into your existing enterprise risk management systems with commensurate controls assurance (see the sketch after this list for one way such an entry might be structured)
- Integrate your use of AI models into existing model risk management processes
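To make the integration step above a little more tangible, here is a lightweight sketch of a risk register entry that links an AI-related risk to its causes, controls and key risk indicators. The field names and the example entry are illustrative only, not a Protecht or Treasury schema.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    name: str
    owner: str
    last_assured: str  # date of the last controls assurance review


@dataclass
class RiskRegisterEntry:
    risk: str
    category: str
    causes: list[str] = field(default_factory=list)
    controls: list[Control] = field(default_factory=list)
    key_risk_indicators: list[str] = field(default_factory=list)


# Illustrative entry only - adapt wording and structure to your own framework.
ai_phishing_risk = RiskRegisterEntry(
    risk="AI-enhanced phishing leads to credential compromise",
    category="Cybersecurity",
    causes=["Generative AI used to tailor social engineering at scale"],
    controls=[
        Control("Phishing simulation and awareness training", "CISO office", "2024-05-01"),
        Control("Multi-factor authentication on all remote access", "IT Operations", "2024-04-15"),
    ],
    key_risk_indicators=["Phishing emails reported per 1,000 staff (monthly)"],
)

print(ai_phishing_risk.risk)
for control in ai_phishing_risk.controls:
    print(f" - Control: {control.name} (owner: {control.owner})")
```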
If you want to know more about managing AI risks within an enterprise risk management framework, Protecht’s Information technology risk management eBook has a comprehensive section on how AI risk fits into broader information technology risk management for risk and IT professionals alike, including a risk management checklist for specific AI projects. Find out more and download the free eBook now: