Generative AI Transformation
Thursday, October 31, 2024
A rising risk awareness
Artificial Intelligence (AI) introduces a complex and multidisciplinary set of risk factors that demand new depth, expertise, and leadership from agency risk functions. Recognizing the urgency of these risks, the Office of Management and Budget (OMB) memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (M-24-10) rightly emphasizes the importance of an integrated, agency-wide risk management function for most types of AI usage. This paper briefly discusses the benefits of treating this risk function as more than a cybersecurity exercise and instead viewing it through a cross-functional, sociotechnical lens. We also discuss an intuitive, probabilistic methodology for quantifying risks despite the uncertainty inherent to AI, thereby enabling more strategic decision-making for AI risk management and portfolio governance at the enterprise level.
Classifying AI risks

One of the requirements in OMB’s memo is to conduct periodic risk reviews of any safety-impacting or rights-impacting AI. Indeed, a clear understanding of the risks is the only way to know the true costs of an AI solution and whether its purported benefits
are desirable considering those costs. Since the risks inherent to AI stem as much from its technological implementation as from its (mis)use, we’ve found it helpful to ground AI risk assurance in cross-functional, sociotechnical thinking about how an AI model fits into a business process. When you shift your attention from the bits and bytes of one particular AI model and instead conceptualize how that model integrates into a business process, four risk categories emerge: technical, sociotechnical, systems-related, and organizational.

When you think about a model in the context of a specific business process, it’s easier to have a tactical conversation about the risks no matter their provenance, whether they be technical (e.g., cybersecurity), sociotechnical (e.g., misuse), systems-related
(e.g., software supply chain), or organizational (e.g., reputational harm). For instance, the ethical issues inherent to using a Large Language Model (LLM) for an HR function, like recruitment, will differ from those inherent to another function, like finance, even if both use cases leverage the same LLM.
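To make this concrete, the short Python sketch below shows one way a team might record a single use case’s risks under these four categories. The use case, field names, and individual risk entries are illustrative assumptions on our part, not a prescribed template.

# Hypothetical, illustrative risk register entry for one AI use case,
# organized by the four categories described above. All names and
# entries are placeholder assumptions, not a standard or a recommendation.
llm_recruitment_risks = {
    "use_case": "LLM-assisted resume screening (HR recruitment)",
    "technical": ["prompt injection", "model performance drift"],
    "sociotechnical": ["biased or unfair candidate screening", "misuse outside the approved workflow"],
    "systems": ["software supply chain exposure from a third-party LLM"],
    "organizational": ["reputational harm from contested hiring decisions"],
}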
Quantifying AI risks
While it’s clear that AI risk assessments need to be tailored to each individual AI application, OMB and other authorities on risk management, including the National Institute of Standards and Technology (NIST) and the Government Accountability Office (GAO), have not been prescriptive about how agencies ought to quantify AI risk exposure. Although useful as a heuristic, qualitative risk registers make it difficult to run simulations or draw apples-to-apples comparisons between use cases, which is crucial for AI portfolio governance.
Enter the KPMG Probabilistic Risk Assessment methodology. Built atop Bayesian Networks, the methodology goes well beyond an understanding of the potential likelihood and impact of risks (i.e., the traditional heat map). It combines probabilistic methods with graph data science to model complex, potentially interdependent risk factors in a fashion that can handle uncertainty, incorporate both quantitative and qualitative data, and visualize the complex interdependencies among risk factors and their potential impacts (Exhibit 2). For decision-makers in the AI risk management space, this unlocks the ability to better understand technical risks, such as traditional cybersecurity vulnerabilities, alongside the more amorphous sociotechnical risks, such as (un)intended (mis)use.
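As a rough illustration of the underlying idea (not KPMG’s proprietary tooling), the Python sketch below builds a toy Bayesian Network over a handful of interdependent risk factors using the open-source pgmpy library. The node names and probabilities are placeholder assumptions, and pgmpy class names can vary slightly by version.

# Minimal illustrative Bayesian Network over hypothetical AI risk factors.
# Requires: pip install pgmpy. All probabilities below are made-up placeholders.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Directed edges encode assumed dependencies among risk factors.
model = BayesianNetwork([
    ("PoorDataQuality", "ModelError"),
    ("ModelError", "MissionHarm"),
    ("Misuse", "MissionHarm"),
])

# Each node is binary (0 = no, 1 = yes); columns of each CPD sum to 1.
cpd_data = TabularCPD("PoorDataQuality", 2, [[0.8], [0.2]])
cpd_misuse = TabularCPD("Misuse", 2, [[0.9], [0.1]])
cpd_error = TabularCPD(
    "ModelError", 2,
    [[0.95, 0.60],   # P(no error | each data-quality state)
     [0.05, 0.40]],  # P(error    | each data-quality state)
    evidence=["PoorDataQuality"], evidence_card=[2],
)
cpd_harm = TabularCPD(
    "MissionHarm", 2,
    [[0.99, 0.70, 0.80, 0.30],   # P(no harm | each error/misuse combination)
     [0.01, 0.30, 0.20, 0.70]],  # P(harm    | each error/misuse combination)
    evidence=["ModelError", "Misuse"], evidence_card=[2, 2],
)
model.add_cpds(cpd_data, cpd_misuse, cpd_error, cpd_harm)
assert model.check_model()

# Baseline probability of mission harm given current assumptions; a what-if
# analysis simply re-runs the query with new evidence (e.g., poor data quality).
inference = VariableElimination(model)
print(inference.query(["MissionHarm"]))
print(inference.query(["MissionHarm"], evidence={"PoorDataQuality": 1}))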
For instance, the methodology permits integration of expert knowledge from the business/mission function, which can be useful where empirical data is lacking, as well as updates when new information comes to light, which is crucial in such a fast-changing
field as AI. For decision-makers, this flexibility enables what-if analyses to determine the impact of changes in information, assumptions, risk appetite, or all of the above. The methodology also enables your most senior leaders to compare and contrast the risk postures of otherwise dissimilar use cases, which can facilitate tough go/no-go decisions. By enhancing your understanding of the complex, multidisciplinary risk factors at play, the methodology ultimately helps all stakeholders uncover the right set of risk mitigation strategies and monitoring plans.
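As a separate, self-contained illustration of comparing otherwise dissimilar use cases on a common quantitative scale, the sketch below simulates hypothetical annual-loss distributions for two use cases using a simple Monte Carlo approach. The parameters, distributions, and use case names are invented for illustration and are not drawn from the KPMG methodology.

# Illustrative Monte Carlo comparison of two hypothetical AI use cases.
# Annual loss = Bernoulli incident occurrence x lognormal impact severity.
import numpy as np

rng = np.random.default_rng(seed=0)
TRIALS = 100_000  # simulated years per use case

def simulate_annual_loss(p_incident, log_impact_mean, log_impact_sigma):
    # log_impact_mean/sigma parameterize the underlying normal in log-space.
    incidents = rng.random(TRIALS) < p_incident
    impacts = rng.lognormal(mean=log_impact_mean, sigma=log_impact_sigma, size=TRIALS)
    return np.where(incidents, impacts, 0.0)

# Placeholder parameters for two dissimilar, hypothetical use cases.
hr_llm = simulate_annual_loss(p_incident=0.20, log_impact_mean=11.0, log_impact_sigma=1.0)
finance_llm = simulate_annual_loss(p_incident=0.08, log_impact_mean=13.0, log_impact_sigma=0.8)

for name, losses in [("HR recruitment LLM", hr_llm), ("Finance LLM", finance_llm)]:
    print(f"{name}: expected annual loss ${losses.mean():,.0f}; "
          f"99th percentile ${np.percentile(losses, 99):,.0f}")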
How KPMG can help
With our rich pedigree in assurance functions, KPMG has developed the Trusted AI Framework to help ensure fairness, transparency, explainability, accountability, data integrity, reliability, security, safety, privacy, and sustainability in AI adoption (Exhibit 3). While the Trusted AI framework was designed for use across all AI activities, from establishing safe and ethical practices for machine learning and GenAI teams to defining data quality standards, its tenets also guide our approach to AI risk management, and its components align with the requirements of OMB memo M-24-10. Through this framework, our cross-functional professionals can help you quantify risk exposure for each AI use case and then leverage probabilistic simulations to model scenarios and their impacts. That transparency can help you cut through the complexity of AI adoption and make more confident, data-driven decisions when assessing and prioritizing AI use cases, helping ensure not just the advancement of technology but also your mission.