AI can create a multiplicity of risk issues, including strategic, operational, financial, ethical, legal and reputational risks across the whole organization.
AI can be a means for digital giants and upstarts to challenge incumbents with new services and structurally low-cost operating models, or a disappointment when products that use AI fail to win customers. When machine-learning models err because of flawed algorithms or data, they can trigger poor pricing and production decisions or flash stock-market crashes, and cause discriminatory recommendations, medical misdiagnoses, injuries and even deaths. As a result, companies may be driven out of promising new businesses and face severe reputational damage, lawsuits and fines. Senior executives see accelerating privacy regulations, and the fines and reputation risk of violating them, as a top risk. And justifiably so: reputation-linked losses increased by 461% between 2011 and 2016, according to a study by reinsurer Steel City Re. AI intensifies the thirst for consumer data for machine-learning training and operations. AI itself can be a threat when used to spread misinformation, steal data or attack vulnerable infrastructure software. All in all, many of the risks that cause severe drops in market capitalization – new competitors, price wars, operational snafus, litigation and regulatory conflicts – can be triggered by AI. There are also risks to society: the undermining of democratic processes through misinformation, employment insecurity, surveillance and the misuse of personal data.
The risk and value of AI should be included in a robust enterprise risk management programme.
Both COSO (the Committee of Sponsoring Organizations of the Treadway Commission) and ISO (the International Organization for Standardization), creators of two leading international risk management standards, recognize that risk is an inherent part of the pursuit of value. Both COSO’s enterprise risk management (ERM) model and ISO’s risk management framework start with the organization’s mission and values, and continue by defining objectives and risk appetite – the amount and type of risk that an organization is prepared to pursue or retain. Only then do companies have the context to identify, assess and respond to risk; review and improve their risk management performance; and report on risk.
Many companies already have a risk management programme or framework. As a significant source of both value and risk, AI should be considered in every phase of those programmes. When the organization affirms its ethical values and establishes governance, its stance on AI and its governance of AI should be included. AI’s risks, and the value it can provide, should be part of the discussion of objectives and risk appetite. Likewise, when a plan for assessing and responding to risk is developed, that plan should cover AI. Similarly, AI risk and value should be included when that plan is reviewed and risk is reported.
Directors should see that AI is given proper attention as they and management develop and execute their risk management plans. That includes championing the proper culture: affirming the organization’s commitment to its ethical values, cultivating a climate of openness about risk appetite and preparedness, and encouraging collaboration between business units and functions.
Figure: COSO enterprise risk management model
Boards and risk management leaders should consider how to include AI in every step of their risk management plans.
Using AI to identify risks and create value
AI can be a means to evaluate risk and value. Financial firms already use AI to evaluate the risk of extending credit to borrowers and to identify fraud, among other uses. With enough data, boards can also use AI for risk analysis and decision support. In 2014, the board of directors of Deep Knowledge Ventures (DKV), a Hong Kong-based venture capital firm, began to use an AI algorithm to evaluate biotech investment risks before making acquisition decisions. The AI system identified more than 50 risk factors for biotech investing. Similarly, the management team of Finland’s Tieto Corporation uses AI to support decision-making. DKV and Tieto are in the vanguard, but are likely to be joined by others. A World Economic Forum study of IT executives found that 45% expected an AI machine to sit on a corporate board of directors by 2025. The need for help in identifying risks certainly exists: 57% of senior executives surveyed by the Financial Executives Research Foundation said they were too late in recognizing the “significant changes and unknowns” that can disrupt their business.
The following suggestions can help the individual who prepares the board discussion and sets the agenda for discussing AI risks and including them in the company’s risk management regime.
Before leading the first meeting
- Prepare yourself: Become familiar with AI, the value organizations can derive from it and its risks. Separate the hype about AI risk from reality. Speak to senior finance, risk, IT and security executives about the risk and ethical issues that are on their minds. The Resources section provides readings and frameworks on AI and risks.
- Gauge board member interest in AI risk: Speak with other board members. Learn what importance they place on AI and the concerns they have about AI risks. Identify the board members who are most interested in moving forward with new AI investments, and those who have concerns or lack interest.
- Set goals: Think ahead about the desired outcomes from the board discussion.
Set the initial agenda
Discuss the risk appetite for AI. Agenda items can include:
- Review: Recapitulate the organization’s current risk appetite in its business strategy and operations, and how risk appetite and tolerance are decided.
- Presentation: Arrange for a briefing on the risks and rewards of AI. The presentation can include the company’s use of and plans for AI, examples from competitors and potential use cases uncovered by researchers; these examples should include revenue and other quantified benefits when possible. The presentation should also discuss the major risks and responsibilities the company will have to manage, the consequences if risks are not managed and the requirements for addressing those risks.
- Discussion: Consider how AI risks affect the company’s appetite and tolerance for risk. Also discuss whether the process of gauging risk appetite and tolerance needs to be amended to include AI.
- Delegate: Decide which members of the executive team, and which board committee, will be responsible for reviewing risk appetite and risk tolerance in the light of AI risk.
- Engage: Decide how the board will stay current with developments in AI risk.
Set follow-up or alternative agenda items. These can include:
- AI risk review: Which AI risks pose the most serious dangers to the organization? What has the company done to manage those risks to date, and what more needs to be done?
- AI risk responsibility: Which board members and members of the executive team have primary responsibility for overseeing and managing AI risk?
- AI risk in the ecosystem: Examine how AI will introduce new risks as companies work together and share information.
- AI risk awareness and culture: Review whether the individuals developing, using and overseeing AI are sufficiently aware of its risks, engaged in managing them and incentivized to achieve the right balance of risk-taking and risk avoidance.
- AI and the enterprise risk management plan: Review the ERM plan, its effectiveness at managing AI risks, and how it may need to be changed.