AI Risk Assessment & Mitigation
What excites us about AI, what scares us, what the regulations say, and how to manage AI risk
August 5th, 2023: In my part-time work, I got the chance to explore the space of A.I. governance. For the role, I had to write a blog post about the topic, which I really enjoyed since A.I. governance is a topic I have been hoping to learn more about. Through this newsletter, I am hoping to share in public what I am learning and what projects I am working on. In that spirit, here is the post I worked on:
For over 30 years, CIMCON Software has empowered financial institutions to take control of their E.U.C.s and manage their risk. We have helped hundreds of companies create up-to-date inventories of their files and models, automatically assess their risk profiles, and then add the proper controls and approval workflows on top of them. Our core principle has always been to provide controls and risk management that do not restrict your organization or add bureaucracy, but instead empower its members through insight, intelligent automation, and flexible but powerful rules.
We now feel it is imperative to leverage our years of experience and our principles to provide controls and risk assessment for a wider array of complex models, and more specifically, models that leverage A.I.
In this post, we want to cover:
What excites us and terrifies us about A.I.
What the regulatory landscape around A.I. models looks like
How to best manage the risk profile from A.I. models and comply with regulations
What excites us and terrifies us about A.I.
From determining credit lending risk to fraud detection and prevention, trading algorithms, financial market analysis, and more, models have become a fundamental tool in the arsenal of banks and other companies providing financial services. But with their widespread adoption have come great risks. Errors produced by inadequate controls and oversight of models have caused even experienced institutions to face disastrous consequences.
For example, a glitch in a trading algorithm cost Knight Capital $440 million when it accidentally bought and sold hundreds of stocks over a 30-minute period. Its stock price fell by 75%, and a year later it was acquired by its rival Getco. Goldman Sachs similarly had a glitch in a trading model that could have cost the company $100 million, and most recently, a lack of proper controls and errors in risk measurement models were found to have played a large part in the collapse of Silicon Valley Bank.
These models were relatively simple. More complex models that leverage A.I. are now becoming widespread. In fact, McKinsey estimates that A.I. adoption among businesses was 2.5x higher in 2022 than in 2017. If your business area does not already leverage this technology, chances are it soon will. But with the great potential benefit in accuracy and capability comes an even greater increase in risk. It is much more difficult to understand how models that leverage A.I. generate their predictions, which can make errors hard to find and even harder to fix. This is why Gartner has predicted that 85% of A.I. projects will deliver erroneous results. With more organizations deploying complex models that are this likely to produce errors, the amount of risk we are seeing today with A.I. is unprecedented.
What the regulatory landscape around A.I. models looks like
Due to the great potential benefit and risk of A.I., financial institutions now face a difficult choice:
If you do not leverage A.I. models, you risk falling behind your competitors
If you do leverage A.I. models, experts estimate that errors are very likely, and as we have seen, these errors can come at a disastrous cost
But beyond the great risk that comes with models producing errors come the risks of regulatory penalties and damage to your reputation. All firms that leverage A.I. models are subject to Model Risk Management regulations, and new regulations specifically about A.I. models are very likely coming soon. In fact, the European Parliament approved the E.U. A.I. Act less than two months ago, on June 14th, 2023. The PRA released a supervisory statement, SS 1/23, a month before that, on May 17th, 2023, on Model Risk Management Principles for Banks, which goes into effect a year later, on May 17th, 2024.
U.S. President Biden held a roundtable with civil society leaders in June about mitigating the risks of A.I., around the same time that U.K. Prime Minister Sunak announced that the U.K. will host a global summit on safety in artificial intelligence in the autumn.
So a lot is happening, but as of now, what are the major regulations that you need to know about?
SS 1/23: This Supervisory Statement, the most recent from the PRA, sets out to define what a model is, how to categorize its risk level, and what the standards are for proper model validation and controls. Models that leverage artificial intelligence are specifically called out in this statement.
SR 11-7: This Supervisory Guidance on model risk management was jointly developed by the Federal Reserve and the O.C.C. and has been in effect since 2011.
CP 6/22: This consultation paper, also from the PRA, was published on June 21st, 2022 and serves as an earlier outline of the expectations for identifying and addressing model risk within banks.
The E.U. A.I. Act: This legislation aims to set a global standard by explicitly banning A.I. applications deemed to pose an unacceptable risk, such as certain uses of facial recognition, and by imposing strict requirements on high-risk applications. It is less directly related to banks and model risk management, but is worth keeping an eye on.
Regulators are appropriately taking the risk of A.I. seriously, with legislation in progress and some of it coming into effect soon. Staying on top of these regulations and recommendations is a way not just to avoid regulatory penalties, but also to follow best practices that will reduce errors and mitigate risk.
What can firms do about mitigating this risk?
As we have discussed, firms are in a tough spot and, without proper standards and controls, are set up to fail when implementing A.I. models within their organizations. Fully complying with upcoming regulations while minimizing risk is going to be an uphill battle, but luckily CIMCON is here to help.
Based on our experience with 500+ clients and the recommendations in these regulations, we have developed a nuanced approach to assessing the risk profile of a model that takes three primary factors into consideration:
Model Complexity: Leveraging our knowledge of the A.I. model landscape, we determine model complexity based on the libraries used, the type of model, and how the model's code is structured.
Model Impact: A key component of model risk, explicitly called out in regulations such as SS 1/23, is model inter-dependence. How many models depend on a model's outputs plays a big role in determining how risky it is.
Frequency of Use: The more a model is accessed, the more important it is for the model to have proper controls and be error-free.
Combining all three signals, along with other inputs, is a great place to start in determining a risk score for each model. At CIMCON, our solutions automatically scan your models to create an up-to-date Model Inventory of your system and automatically calculate this risk score to highlight models that may deserve more focus. Our solution also lets you visualize the model inter-dependencies used in the risk score, helping you trace errors back to the source model producing them. Lastly, our products allow you to set up customizable approval workflows to control who can make changes to any models and E.U.C.s.
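To make the idea concrete, here is a minimal sketch in Python of how the three factors above might be combined into a single score. The weights, scaling thresholds, and all names here are illustrative assumptions for the sake of the example, not CIMCON's actual scoring methodology.

```python
# Hypothetical sketch: combining complexity, impact, and frequency of use
# into one risk score. Weights and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ModelProfile:
    complexity: float      # 0.0 (simple spreadsheet) to 1.0 (deep learning)
    dependent_models: int  # how many models consume this model's outputs
    uses_per_month: int    # how often the model is accessed


def risk_score(m: ModelProfile,
               w_complexity: float = 0.4,
               w_impact: float = 0.4,
               w_frequency: float = 0.2) -> float:
    """Return a 0-100 risk score; higher means more oversight is warranted."""
    # Saturate impact and frequency so one extreme value cannot dominate.
    impact = min(m.dependent_models / 10.0, 1.0)
    frequency = min(m.uses_per_month / 1000.0, 1.0)
    score = (w_complexity * m.complexity
             + w_impact * impact
             + w_frequency * frequency)
    return round(100 * score, 1)


# A heavily used A.I. model feeding several downstream models scores far
# higher than a rarely used standalone spreadsheet.
ai_model = ModelProfile(complexity=0.9, dependent_models=8, uses_per_month=5000)
spreadsheet = ModelProfile(complexity=0.2, dependent_models=0, uses_per_month=10)
print(risk_score(ai_model))    # 88.0
print(risk_score(spreadsheet)) # 8.2
```

The weighted-sum form keeps the score easy to explain to validators and regulators: each factor's contribution is visible, and the weights can be tuned per institution.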
Large language models such as ChatGPT, deep learning text-to-image models such as DALL-E, and many others are transforming what we can accomplish as a society. Exploring and creatively implementing these new, complex models will be key to being part of that future rather than being left behind. But errors can cause irreversible damage to our bottom line and, more importantly, to our reputation. Keeping up with regulations and best practices is therefore not a nice-to-have but a must, and that is why we are so excited about our heavy investment in helping firms such as yours do just that.