Managing Artificial Intelligence Technology Risk

SIG University's Certified Third Party Risk Management Professional (C3PRMP) program covered all areas of third-party risk management and provided practical tips and guidance for tackling today's third-party risk challenges. I have more than 15 years of experience in the banking and financial services industry, during which I have been involved in various regulatory projects and worked closely with Third-Party Governance, Procurement, and Legal teams. In this essay, I examine the technology risk area in general and Artificial Intelligence (AI) risks in particular. I also review the sources of AI risk, the challenges that firms, especially in financial services, face in managing AI risks, and a practical approach to managing those risks.

Understanding the Problem

AI technologies and solutions are used in most industries, especially in financial services. Banks and financial firms use AI-based solutions or models to optimize their operations and remain competitive. However, many third-party vendors supply these services, which directly impact the AI-governed systems of banks or other businesses. Therefore, firms must understand how their business can be affected by AI-driven systems and how to manage the new and varied risks posed by third parties.

AI solutions are increasingly embedded in vendor-provided software, hardware, and software-enabled services deployed by individual business units, potentially introducing new, unchecked risks. Risks that are not well defined and adequately understood are difficult to measure, and firms that assess AI systems and models using traditional technology risk assessment methods may increase their operational risk.

Failure to adequately manage AI risks can result in adverse outcomes for the bank or its customers, including disruption to systems and business operations, monetary loss, reputational damage, or regulatory noncompliance. Therefore, firms need effective risk management and controls specifically for AI systems, whether developed internally or purchased from vendors.


Challenges in Managing AI Risks

AI's inherent nature: AI systems and models are complex because they are statistical in nature and carry inherent uncertainty, which makes their workings challenging to explain and complicates risk measurement. Unlike traditional systems, AI models learn from data and human feedback.

Regulatory: Any system that takes data as input and produces predictions or classifications using quantitative, mathematical tools and techniques is considered a model. Some applications and use cases, such as chatbots, robo-advisors, automated trading, natural-language processing, and HR analytics, can qualify as "models" under banking regulatory definitions.

Complex products/development: Vendor SaaS offerings often embed AI and machine-learning algorithms, which makes them more complicated and opaque than traditional systems. Properly assessing these systems requires significant coordination among IT, the vendor, Procurement, Third-Party Risk Management, Legal, and the business.

Operational process: Having processes in place to identify and monitor changes to the models is another significant challenge.

Expertise: Insufficient dedicated subject matter experts (SMEs) within the organization. For example, the data science team may not fully understand the regulatory requirements for the solution the business is trying to implement.

Documentation: Lack of documentation that matches the skill levels of its intended readers.

Mitigating Approaches

As part of an effective third-party risk management program, firms are expected to perform thorough due diligence, effective contract management, and ongoing monitoring of third parties based on the criticality of the services provided. These expectations apply to the risks posed by both AI applications and the functions that AI supports, regardless of whether the tool was developed internally or obtained from a third party. As covered in the course, the associated risk management should be commensurate with the criticality of the services or functions that the AI supports.

When adopting third-party AI solutions, consider risk thresholds and ensure they align with the corporate risk appetite. Procurement can coordinate among the vendor, Legal, risk management, and business teams to ensure this happens. While business users incorporate those thresholds into their operational procedures, Procurement and Legal should consider writing them into the contract as appropriate.

For example, if model accuracy falls below a set limit, an escalation should be triggered (an operational procedure). Frequent escalations may indicate that it is time to refine the model. Will model refinement or ongoing model management be part of the maintenance agreement or treated as a change order? Those considerations should be spelled out in the contract.
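As a minimal sketch of what such an operational procedure might look like, the snippet below checks a model's reported accuracy against a contractual threshold and flags when the escalation rate over a monitoring window suggests refinement. The threshold, window, and rate values are illustrative assumptions, not regulatory or contractual standards.

```python
# Hypothetical escalation check for a vendor AI model.
# ACCURACY_THRESHOLD and ESCALATION_RATE_LIMIT are assumed values for
# illustration; real limits would come from the contract and risk appetite.

ACCURACY_THRESHOLD = 0.90    # assumed contractual minimum accuracy
ESCALATION_RATE_LIMIT = 0.2  # assumed: >20% of recent checks failing

def needs_escalation(accuracy: float, threshold: float = ACCURACY_THRESHOLD) -> bool:
    """Single check: accuracy below the agreed limit triggers an escalation."""
    return accuracy < threshold

def suggests_refinement(recent_accuracies: list[float],
                        threshold: float = ACCURACY_THRESHOLD,
                        rate_limit: float = ESCALATION_RATE_LIMIT) -> bool:
    """Frequent escalations over a monitoring window may mean the model
    needs refinement -- a contract question (maintenance vs. change order)."""
    failures = sum(1 for a in recent_accuracies if a < threshold)
    return failures / len(recent_accuracies) > rate_limit

# Example: three of six recent checks fall below the threshold,
# so the refinement conversation should be triggered.
window = [0.93, 0.88, 0.91, 0.86, 0.94, 0.89]
print(needs_escalation(0.87))       # True
print(suggests_refinement(window))  # True (3/6 = 0.5 > 0.2)
```

The point of the sketch is that both the single-check limit and the refinement trigger are parameters that business, Procurement, and Legal can agree on up front and write into procedures and contracts.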

The approach proposed by NIST (2) in its latest draft, though still evolving, provides an overarching policy that can be adopted to evaluate new sources of AI risk, regardless of whether an AI system is developed internally or built by a vendor. It uses a simple taxonomy to identify and manage AI system risks:

Technical characteristics include accuracy, reliability, robustness and resilience, and security.

Socio-technical characteristics include explainability, interpretability, privacy, safety, and managing bias.

Guiding principles include fairness, accountability, and transparency.

Once risks arising from AI systems are mapped using this taxonomy, quantitative and qualitative metrics can be identified. Consider whether any of those metrics need to be built into the contract or into operational procedures.
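To make the mapping concrete, the sketch below arranges the taxonomy's three groups of characteristics into a structure and flattens it into a checklist of metrics to consider for contracts or procedures. The specific metric names and threshold values are hypothetical examples, not values prescribed by NIST.

```python
# Illustrative mapping of the NIST draft taxonomy groups to example metrics.
# All metric names and limits below are assumptions for illustration only.

AI_RISK_TAXONOMY = {
    "technical": {
        "accuracy": {"metric": "classification accuracy", "min": 0.90},
        "reliability": {"metric": "service uptime", "min": 0.999},
        "security": {"metric": "open critical vulnerabilities", "max": 0},
    },
    "socio-technical": {
        "explainability": {"metric": "plain-English model description on file"},
        "privacy": {"metric": "PII fields documented and classified"},
        "bias": {"metric": "outcome disparity ratio across groups", "max": 1.25},
    },
    "guiding_principles": {
        "transparency": {"metric": "responsible-disclosure statement provided"},
    },
}

def contract_checklist(taxonomy: dict) -> list[str]:
    """Flatten the taxonomy into a checklist of metrics to consider
    building into the contract or operational procedures."""
    return [
        f"{group}/{characteristic}: {spec['metric']}"
        for group, characteristics in taxonomy.items()
        for characteristic, spec in characteristics.items()
    ]

for item in contract_checklist(AI_RISK_TAXONOMY):
    print(item)
```

A structure like this gives Procurement, Legal, and risk teams a shared starting point: each row is a candidate contract clause or monitoring procedure rather than an abstract risk category.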

Explainability, privacy, security, and the guiding principles are particularly important because regulators examining models look for details in these areas. Explainability (1) refers to the user's perception of how the AI system or model works. Explainability risk may arise if we incorrectly infer how a model operates; it can be managed through explicit, plain-English descriptions of how the model works. This is an area to examine during the risk assessment before deployment.

Since AI systems and models use large volumes of data, understanding data origins, use, security, classification, and governance is critical. InfoSec should review the applicable privacy and information security requirements during the risk assessment. Robust data management produces quality data, which helps AI systems make better predictions or classifications and helps manage bias risk.

Responsible disclosures covering how the AI system or model works, its key assumptions, and responsibilities should be mandatory regardless of whether development was internal or involved a third party. It is prudent to develop in-house subject matter expertise and cross-training, as AI systems will continue to grow in complexity and be widely adopted. Defining roles and responsibilities for managing the AI life cycle will bring more clarity and produce the expected outcomes.


AI is here to stay. It will continue to grow in complexity and will be widely adopted in many business functions. To better identify and manage AI risks, some key considerations include:

  • Firms should consider adopting a simple AI risk taxonomy to better assess AI-related risks, whether systems are developed internally or by third parties.
  • Communication about how to understand and manage the risks of AI systems will help create awareness within the organization and realize the full potential of AI.
  • To evaluate AI risks when buying AI solutions from vendors, the following can be considered:

Engage external expertise or consulting firms to assess AI risks on an on-demand basis, or develop dedicated in-house expertise within the organization. Larger banks already have a Model Risk Management group, so starting there may be quickest, as they have experience and are well versed in the regulations (though their focus is on static models).

  • Update vendor risk management processes to identify, assess, and manage AI risks separately from traditional technology risks.
  • Develop proper roles and responsibilities for the AI life cycle with sufficient staffing.
  • Develop proper AI-specific risk metrics and incorporate them into contracts as required.
  • Frequently review the risk controls and processes for continuous improvement. Collaborate with peers from other institutions and regulators to learn best practices and guidelines for improvement.

Based on my learnings from this course, I plan to initiate a conversation on this topic within my organization to reflect on our current practices and how we can build more robust AI risk management practices to increase overall performance and trustworthiness.

SIG University’s Certified Third-Party Risk Management Professional (C3PRMP) program is a globally recognized certification that is the “gold standard” in terms of relevance, scope and content. The C3PRMP program was created by Linda Tuck Chapman, an advisor, educator, author and expert.