Empowering AI Leadership: Risk


Introduction | Examples | Responsibilities | Oversight | Agenda | Resources | Endnotes


Other modules:

Home | Audit | Brand Strategy | Competitive Strategy | Customer Strategy | Cybersecurity | Ethics | Governance | Operations Strategy | People and Culture | Responsibility | Risk | Sustainable Development | Technology Strategy | Glossary


Governance experts around the world have come to recognize the importance of overseeing IT risk. “IT is essential to manage the transactions, information and knowledge necessary to initiate and sustain a company,” notes the Institute of Directors in Southern Africa’s King Report on Governance for South Africa. “The risk committee should consider IT risk as a crucial element of the effective oversight of risk management of the company.”[1] As companies take advantage of artificial intelligence (AI), boards – particularly risk committees – will have to pay particular attention to the uncertain outcomes of AI and the various ethical issues that arise.

AI can create a multiplicity of risk issues, including strategic, operational, financial, ethical, legal and reputational risks across the whole organization.

AI can be a means for digital giants and upstarts to challenge incumbents with new services and structurally low-cost operating models, or a disappointment when products that use AI fail to win customers. When machine-learning models err due to flawed algorithms or data, they can trigger poor pricing and production decisions or flash stock-market crashes, and cause discriminatory recommendations, medical misdiagnoses, injuries and even deaths. As a result, companies may be driven out of promising new businesses and face severe damage to reputations, lawsuits and fines.[2] Senior executives see accelerating privacy regulations, and the fines and reputation risk of violating them, as a top risk.[3] And justifiably so: Reputation-linked losses increased by 461% between 2011 and 2016, according to a study by reinsurer Steel City Re.[4] AI increases the thirst for consumer data for machine-learning training and operations. AI itself can be a threat when used to spread misinformation, steal data or attack vulnerable infrastructure software. All in all, many of the risks that cause severe drops in market capitalization – new competitors, price wars, operational snafus, litigation and regulatory conflicts – can be triggered by AI.[5] Additionally, there are risks to society from the undermining of democratic processes through misinformation, employment insecurity, surveillance and the misuse of personal data.

The risk and value of AI should be included in a robust enterprise risk management programme.

Both COSO (the Committee of Sponsoring Organizations of the Treadway Commission) and ISO (the International Organization for Standardization), creators of two leading international risk management standards, recognize that risk is an inherent part of the pursuit of value. Both COSO’s enterprise risk management (ERM) model and ISO’s risk management framework start with the organization’s mission and values, and continue by defining objectives and risk appetite – the amount and type of risk that an organization is prepared to pursue or retain.[6] Only then do companies have the context to identify, assess and respond to risk; review and improve their risk management performance; and report on risk.

Many companies already have a risk management programme or framework. As a significant source of value and risk, AI should be considered in every phase of their programmes. When ethical values are affirmed and governance is established, the company’s stance on and governance of AI should be included. AI’s risks, and the value it can provide, should be part of the discussion of objectives and risk appetite. Likewise, when a plan for assessing and responding to risk is developed, that plan should cover AI. Similarly, AI risk and value should be included when that plan is reviewed, and risk is reported.

Directors should see that AI is given proper attention as they and management develop and execute their risk management plans. That includes championing the proper culture: affirming the organization’s commitment to its ethical values, cultivating a climate of openness about risk appetite and preparedness, and encouraging collaboration between business units and functions.[7]

Figure: COSO enterprise risk management model

Boards and risk management leaders should consider how to include AI in every step of their risk management plans.

Source: “Enterprise Risk Management: Integrating with Strategy and Performance – Executive Summary”, COSO (Committee of Sponsoring Organizations of the Treadway Commission), June 2017

Use of AI to identify risks and create value.

AI can be a means to evaluate risk and value. Financial firms already use AI, inter alia, to evaluate the risk of extending credit to borrowers and identify fraud. With enough data, boards can also use AI for risk analysis and decision support. In 2014, the board of directors of Deep Knowledge Ventures (DKV), a Hong Kong-based venture capital firm, began to use an AI algorithm to evaluate biotech investment risks before making acquisition decisions. The AI system identified more than 50 risk factors for biotech investing.[8] Similarly, Finland’s Tieto Corporation’s management team uses AI to support decision-making.[9] DKV and Tieto are in the vanguard, but are likely to be joined by others. A World Economic Forum study of IT executives found that 45% expected an AI machine to sit on a corporate board of directors by 2025.[10] The need for help in identifying risks certainly exists: 57% of senior executives surveyed by the Financial Executives Research Foundation said they were too late in recognizing the “significant changes and unknowns” that can disrupt their business.[11]
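As a minimal illustration of the kind of credit-risk evaluation mentioned above, the sketch below fits a logistic model to synthetic borrower data and scores a new applicant's probability of default. The features, data-generating assumptions and thresholds are invented for illustration; they do not represent any firm's actual model.

```python
# Sketch: scoring borrower default risk with a logistic model.
# All data is synthetic; features and coefficients are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic borrowers: [debt-to-income ratio, years of credit history]
X = rng.uniform([0.0, 0.0], [1.0, 30.0], size=(500, 2))

# Assumed ground truth: defaults grow more likely with higher debt-to-income
# and shorter credit history (logistic relationship, invented coefficients)
p_default = 1 / (1 + np.exp(-(4 * X[:, 0] - 0.15 * X[:, 1])))
y = rng.random(500) < p_default

model = LogisticRegression().fit(X, y)

# Score a new applicant: the default probability informs the credit decision
applicant = np.array([[0.8, 2.0]])   # high debt load, thin credit file
risk = model.predict_proba(applicant)[0, 1]
print(f"estimated default risk: {risk:.2f}")
```

In practice such scores feed into a larger decision process with documented cut-offs, fairness checks and audit trails; the point here is only the mechanic of turning historical data into a forward-looking risk estimate.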


A leading bank in Europe

A leading bank in Europe that used artificial intelligence to optimize front- and back-office operations significantly decreased the risk of its AI solutions by first understanding and prioritizing the types of risks that may arise from using AI in its operational scenarios. After gaining this deep understanding of the risks, the bank developed strong policies, procedures, worker training and contingency plans. For example, a wide array of monitoring, oversight and human-override mechanisms comes into play when the AI algorithm that calculates a customer’s financial health is reviewed.[12]
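The human-override arrangement described above can be sketched as a simple decision gate that escalates low-confidence model outputs to a person. The function name, score semantics and confidence threshold are assumptions for illustration, not the bank's actual controls.

```python
# Hedged sketch of a human-override gate: route a model's output either to an
# automated decision or to human review, based on the model's confidence.
def route_decision(score: float, confidence: float,
                   confidence_floor: float = 0.8) -> str:
    """Return the routing for one model output.

    score: model's estimate (here, higher = healthier customer finances)
    confidence: model's self-reported certainty in [0, 1]
    confidence_floor: illustrative threshold below which a person decides
    """
    if confidence < confidence_floor:
        return "human_review"   # low confidence: escalate to a person
    return "approve" if score >= 0.5 else "decline"

print(route_decision(score=0.9, confidence=0.95))  # confident, high score
print(route_decision(score=0.9, confidence=0.40))  # uncertain: escalate
```

Gates like this are one concrete way a contingency plan becomes operational: the threshold is a policy choice the risk function can set, monitor and revise.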

A leading financial institution in Europe

A leading financial institution in Europe reviewed its algorithms’ explainability and concluded that the current models were too complex to be explained. The institution decided to simplify its algorithms. While some predictive power was lost, the institution gained explainability, which resulted in higher buy-in from its employees. Simpler models also made it easier to detect biases early in the data the algorithms required.[12]
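The trade-off the institution faced, giving up some predictive power for a model simple enough to explain, can be illustrated with synthetic data: a gradient-boosted ensemble is compared with a shallow decision tree whose rules can be printed for human review. The data and models are stand-ins, not the institution's actual systems.

```python
# Sketch of the accuracy-vs-explainability trade-off, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A complex model: accurate but hard to explain to employees or regulators
complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# A deliberately simple model: a depth-3 tree whose rules fit on one page
simple_model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(f"complex model accuracy: {complex_model.score(X_te, y_te):.3f}")
print(f"simple model accuracy:  {simple_model.score(X_te, y_te):.3f}")

# The shallow tree can be rendered as human-readable rules for review:
print(export_text(simple_model, max_depth=2))
```

The printed rules are what makes early bias detection practical: a reviewer can see exactly which features drive each branch, which is impossible to do by inspection for the boosted ensemble.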


UPS, the package delivery, transportation and logistics company, built an online platform that combines machine learning and advanced analytics. The app, called Network Planning Tools (NPT), lets the company’s engineers view activity at UPS facilities around the world and route shipments to the ones with the most capacity. The app gets some of its smarts from AI, which it uses to create forecasts about package volume and weight based on analysis of historical data. The machine-learning algorithms also analyze decisions the company’s engineers made and assess how they affected customer satisfaction and internal costs. That kind of insight is crucial during the frenetic holiday season. In preparation, the company has used the NPT app to identify and eliminate bottlenecks. UPS expects the program to save it $100 million to $200 million a year.[13]


In its description of board responsibilities, the G20/OECD Principles of Corporate Governance emphasizes oversight of a company’s risk management system, policies and procedures. Risk oversight is “an area of increasing importance for boards” that is “closely related to corporate strategy”. Boards “should retain final responsibility for oversight of the risk management system” and “demonstrate a leadership role to ensure that an effective means of risk oversight is in place”. Depending on the company’s size and risk profile, the principles recommend that companies consider establishing a risk committee to allow the full board and the audit committee more time for other responsibilities.

The board’s risk oversight duties are also pertinent to AI. They include:

  • Holding executives accountable for ethics. “The board has a key role in setting the ethical tone of a company” (Principle VI:C). Failure to adhere to ethical principles when using AI puts companies at risk. “High ethical standards are in the long-term interests of the company as a means to make it credible and trustworthy, not only in day-to-day operations but also with respect to longer-term commitments.”
  • Oversight of “the risk management system and systems designed to ensure that the corporation obeys applicable laws” (Introduction to section VI). These duties apply to using AI to manage risk, including AI in the risk management plan and ensuring no laws are broken while using AI systems. The board should also see that its company’s system for spotting, managing, tracking and acting on AI risks is robust, continually improving and well-coordinated. Boards also have overall responsibility for the cybersecurity risk to AI models and the data they use.[14]
  • “Oversight of the accountabilities and responsibilities for managing risks, specifying the types and degree of risk that a company is willing to accept in pursuit of its goals, and how it will manage the risks it creates through its operations and relationships” (Principles, section VI.D.1). Boards hold management accountable for managing AI risk. That can include appointing and overseeing a chief risk officer. Boards should also oversee the creation and execution of their organization’s risk management plan, including setting the risk appetite for AI. Board oversight should be ongoing and active, given the board’s leadership role called for by the OECD Principles.
  • Ensuring the appropriate control systems for risk management, as part of “ensuring the integrity of the essential financial reporting and monitoring systems” (Principle VI.D.1, 7). This includes oversight of the use of AI in control and financial systems.

As with its oversight of strategy, boards should also:

  • Be fully informed about risk and risk management in order to act in good faith, with due diligence and care. Directors should have access to accurate, relevant and timely information about the regulations affecting the use of AI. They should also be informed about the risks associated with AI, including risks involving security, models, algorithms, data, ethics and bias, and how the company manages them.
  • Consider risk as they review and guide corporate performance, expenditures and acquisitions. This includes considering AI risk indicators as the board looks at the value and effectiveness of AI and examines AI investments. Boards should also review whether the company’s acquisitions and partnerships introduce new risks, and how well management are implementing their strategy for risk.

The analysis in this section is based on general principles of corporate governance, including the G20/OECD Principles of Corporate Governance, 2015. It does not constitute legal advice and is not intended to address the specific legal requirements of any jurisdiction or regulatory regime. Boards are encouraged to consult with their legal advisers in determining how best to apply the principles discussed in this module to their company.


This section includes three tools to help directors oversee AI risk management. The knowledge assessment tool helps board members rate whether they possess, or have access to, the knowledge required to independently judge management’s knowledge and leadership on AI risk management.

View Appendix 1 for the knowledge assessment tool here

The performance review tool consists of questions boards can ask management about their knowledge of AI and risk, and the progress and performance of their actions. It offers the SCEPTIC framework to help directors assess the answers they receive.

View Appendix 2 for the performance review tool here

The guidance tool offers possible suggestions for further action in an “If, then” format.

View Appendix 3 for the guidance tool here


The following suggestions can help the individual who prepares the board discussion and sets the agenda on discussing AI risks and including them in the company’s risk management regime.

Before leading the first meeting

  • Prepare yourself: Become familiar with AI, the value organizations can derive from it and its risks. Separate the hype about AI risk from reality. Speak to senior finance, risk, IT and security executives about the risk and ethical issues that are on their minds. The Resources section provides readings and frameworks on AI and risks.
  • Gauge board member interest in AI risk: Speak with other board members. Learn what importance they place on AI and the concerns they have about AI risks. Identify the board members who are most interested in moving forward with new AI investments, and those who have concerns or lack interest.
  • Set goals: Think ahead about the desired outcomes from the board discussion.

Set the initial agenda

Discuss the risk appetite for AI. Agenda items can include:

  • Review: Recapitulate the organization’s current risk appetite in its business strategy and operations, and how risk appetite and tolerance are decided.
  • Presentation: Arrange for a briefing on the risks and rewards of AI. The presentation can include the company’s use of and plans for AI, examples from competitors and potential use cases uncovered by researchers. It should include revenue and other quantified benefits when possible. The presentation should also discuss the major risks and responsibilities the company will have to manage, the consequences if risks are not managed and the requirements for addressing those risks.
  • Discussion: Consider how AI risks affect the company’s appetite and tolerance for risk. Also discuss whether the process of gauging risk appetite and tolerance needs to be amended to include AI.
  • Delegate: Decide which members of the executive team, and which board committee, will be responsible for reviewing risk appetite and risk tolerance in the light of AI risk.
  • Engage: Decide how the board will stay current with developments in AI risk.

Set follow-up or alternative agenda items. These can include:

  • AI risk review: Which AI risks pose the most serious dangers to the organization? What has the company done to manage those risks to date, and what more needs to be done?
  • AI risk responsibility: Which board members and members of the executive team have primary responsibility for overseeing and managing AI risk?
  • AI risk in the ecosystem: Examine how AI will introduce new risks as companies work together and share information.
  • AI risk awareness and culture: Review whether the individuals developing, using and overseeing AI are sufficiently aware of its risks, engaged in managing them and incentivized to achieve the right balance of risk-taking and risk avoidance.
  • AI and the enterprise risk management plan: Review the ERM plan, its effectiveness at managing AI risks, and how it may need to be changed.


(All links as of 10/8/19)

Risk management frameworks and guides

Cybersecurity frameworks and guides


  • George Westerman and Richard Hunter, IT Risk: Turning Business Threats into Competitive Advantage, Harvard Business Review Press, 2007.
  • Jim DeLoach, Enterprise Wide Risk Management: Strategies for Linking Risk and Opportunity, Financial Times Prentice Hall, 2000.



