Contents
Introduction | Responsibilities | Oversight | Agenda | Resources | Endnotes
Introduction
Artificial intelligence (AI) is a high-stakes technology. Analysts project that AI will generate trillions of dollars of value through new business models and processes and through innovative products and services.[1] But the unintended consequences of AI can damage reputations, eroding brand equity and creating tension with shareholders and with increasingly ethically conscious and informed consumers. Underperforming AI systems also cause costly delays while they are re-examined and corrected. In a July 2018 survey by SAS, Accenture Applied Intelligence, Intel and Forbes Insights, 24% of respondents reported having completely overhauled an AI system because of inconsistent outcomes, a lack of transparency and/or biased results.[2]
The question then becomes: How do boards oversee the leaders who make major decisions about AI? The answer must clarify who is accountable for oversight and decision-making, and what those responsibilities entail. The oversight process should produce AI that has undergone rigorous ethical review and is effective, efficient and inclusive, without stifling innovation.
This is the province of board-level AI governance. AI governance requires attention because:
- Artificial intelligence creates new technology governance challenges and amplifies existing ones. AI introduces a new issue to the world of governance: how to oversee systems that can learn and independently make complex decisions, yet are vulnerable to human weaknesses such as bias and to criminal misuse. Apart from setting ethics standards (see Ethics module), leaders must decide how to select the decision-making models at the heart of machine-learning systems and make decisions about transparency and human control. At the same time, AI raises the importance of data governance, because training machine-learning systems requires enormous datasets, and of workforce governance, since many employees will see their work changed or eliminated by AI systems. Boards must also consider whether today’s governance processes and accountabilities can address the questions AI brings to strategy, risk and other control issues.
- The regulatory landscape is changing. Laws are moving quickly to protect the privacy and liberty of individuals against the misuse of data. Given the wide scope of artificial intelligence and machine-learning technology, organizations will benefit from taking a proactive approach to the ethical governance of AI, rather than redesigning and redeveloping technologies later to comply with legal standards. (See the sidebars on regulations in Europe and North America below.)
- AI must be aligned with values and ethics. Explicability, transparency, accountability, fairness, responsible data handling, guarding against criminal misuse and mindfulness of the societal implications of AI are the essential pillars of the responsible use of data and AI.
To make the best use of AI systems while remaining anchored to its core values, a company must create the right framework for the ethical governance of artificial intelligence technology. The framework should set out decision-making rights and accountability for the board and the management team, specifying how ethics standards are set and by whom they are enforced. It should also extend governance to the design and operation of AI systems.
This module is intended to help corporate boards decide which governance responsibilities need to be assigned for AI; which board committees, executives and ethics boards participate in the governance process; and how accountability will be shared.
In Europe
- The General Data Protection Regulation (GDPR) is a regulatory framework that sets forth legal protections regarding the use of the data of all EU data subjects, regardless of the country in which they reside or the platform on which their data resides.
- The EU High-Level Expert Group on Artificial Intelligence (AI HLEG) was formed to advise the European Commission and make recommendations on AI ethics guidelines, as well as on policies for funding and infrastructure for AI in Europe.
- In the UK, the government released an "AI Sector Deal", which includes the establishment of a government-wide Office for AI to oversee governance. In addition, the House of Lords Select Committee on Artificial Intelligence produced a comprehensive report on artificial intelligence.
In North America
- The California Consumer Privacy Act (CCPA) will take effect in 2020. Because there is no federal privacy standard, the CCPA may become the de facto privacy standard for the US unless Congress passes its own.
- On 16 May 2018, Mayor Bill de Blasio of New York City announced the creation of the Automated Decision Systems Task Force. The task force will recommend criteria and procedures for reviewing and assessing algorithmic tools purchased and deployed by the city. These recommendations may include procedures for explaining algorithmic decisions, allowing public appeals of such decisions, ensuring algorithms are not biased against highly sensitive groups and making technical information about deployed algorithms publicly available.
Responsibilities
Boards have a responsibility to ensure their organization has a robust governance structure for setting objectives and monitoring performance. That structure must define how the board, management, shareholders and other stakeholders participate in decision-making, ensure the board has the information required to provide oversight, and ensure shareholders have information about the strategy, risks and performance of the company. Boards are also responsible for making sure management’s plans, actions and policies involving the use of AI are responsible and consistent with the company’s ethical standards and legal obligations.
According to the G20/OECD Principles of Corporate Governance:
- Corporate governance... provides the structure through which the objectives of the company are set and the means of attaining those objectives and monitoring performance are determined. (About the Principles, p. 9)
- The board has a key role in setting the ethical tone of a company, not only by its own actions, but also in appointing and overseeing key executives and consequently the management in general. (Principle VI.C)
- The board should fulfil certain key functions, including reviewing and guiding corporate strategy, major plans of action, [and] risk management policies and procedures. (Principle VI.D.1)
- Basic shareholder rights should include the right to... obtain relevant and material information on the corporation on a timely and regular basis... [and] elect and remove members of the board. (Principle II.A)
There are a few key areas that a corporate board can examine to determine if adequate governance structures are in place for AI:
- Board oversight – clarifying which board committee(s) provide oversight, which AI activities they oversee, and whether AI necessitates establishing a new board committee.
- Ethics board – deciding on the responsibilities of the ethics board overseeing AI, how it will maintain its independence and selecting and confirming its members.
- Risk assessment – identifying AI-specific risks that should be considered while providing AI governance.
- Monitoring, audit and response – evaluating the effectiveness of compliance and ethics assurance programmes, and the response to allegations of violations.
- Training – providing education for employees, contractors and other third parties about compliance risk and programmes.
- Reporting to shareholders – as required by law, ensuring reports to shareholders and filings to regulators include information about the risks of AI. These reports should also cover the use of AI in financial reporting and auditing, as part of verifying the accuracy of financial statements.
See the Ethics module for a more detailed explanation of the board’s ethics responsibilities.
The analysis in this section is based on general principles of corporate governance, including the G20/OECD Principles of Corporate Governance, 2015. It does not constitute legal advice and is not intended to address the specific legal requirements of any jurisdiction or regulatory regime. Boards are encouraged to consult with their legal advisers in determining how best to apply the principles discussed in this module to their company.
Oversight
This section includes a single tool:
The AI governance responsibilities tool helps boards of directors create a governance structure for AI. It lists AI activities that may require governance, helps directors decide whether to keep or reassign governance responsibilities and provides a worksheet for assigning specific oversight responsibilities to board committees and senior management. The tool also covers setting the responsibilities and qualifications of an independent ethics board.
While using the tool, consider these questions:
Does the governance-setting process:
- Capture a diverse range of views?
- Offer opportunities for people outside of leadership to provide input?
- Follow the organization’s ethics principles?
- Protect innovation as well as ethics?
- Include people with the expertise to advise on governance policy?
- Consider motivations for complying with governance processes?
Does the resulting governance structure:
- Support the organization’s ethical principles?
- Follow the organizational structure?
- Keep pace with rapid technological change?
- Support innovation?
- Provide oversight of applications?
- Provide guidelines on how frequently governance is updated?
View Appendix for the AI governance responsibilities tool here
Agenda
Before the first meeting
Before setting the agenda for the first board meeting on AI governance, the individual who leads the discussion can prepare by:
- Studying the issues: Become familiar with the governance and ethics issues created by AI, and the most urgent issues for the board to address. The Resources section provides sources on AI and technology standards and ethics codes, examples of ethics boards and other helpful materials. Speak to executives – in particular the senior IT executive, the chief operating officer and the senior executives responsible for data management and ethics – about the governance issues that are on their minds.
- Gauging board-member interest in AI governance: Talk to other board members. Learn what importance they place on developing an AI governance framework and what concerns they have. Identify the board members who are most interested in rapidly moving forward on addressing AI governance and ethics issues, and those who have concerns or lack interest.
- Setting goals: Identify the desired outcomes from the board discussion.
Topics to consider for the initial agenda include:
- The need for governance: Discuss why establishing ethics principles for AI, ensuring they are followed and providing oversight of AI decisions require a governance mechanism at your company.
- Gap analysis: Review existing compliance, ethics and technology governance programmes. Are they adequate to govern AI and manage its ethics and risk challenges?
- Principles and frameworks: Launch the process of establishing guiding ethical principles and updating governance structures. The Oversight section provides questions to ask.
Other suggested topics for discussions include:
- Employee attitudes: Employees have a personal, prevailing sense of right and wrong, honed and adjusted by experience. How do the ethics and governance guidelines square with employees’ moral sense?
- Employee safeguards: Review whether there are appropriate forums and channels that enable employees to identify and raise issues or concerns around ethical questions.
- Governance and ethics tools: Review ways to make it easier for employees to obtain ethics and governance guidance, such as chatbots and hotlines.
- Frameworks vs. realities: Does the way AI decisions are made in reality match the framework? Are management incentives aligned with the framework? Is the framework solving the problems it is designed to solve?
Resources
(All links as of 3/8/19)
Articles and reports on AI governance
- “AI Governance and Its Future”, The Aspen Institute.
- “A Proposed Model Artificial Intelligence Governance Framework”, Personal Data Protection Commission, Singapore, January 2019.
- “Perspectives on Issues in AI Governance”, Google.
- Rumman Chowdhury, “An AI Governance Approach to Support Innovation”, 1776.vc, 5 April 2019.
- Urs Gasser and Virgilio A.F. Almeida, “A Layered Model for AI Governance”, IEEE Internet Computing, 2017.
- “Wrestling with AI Governance around the World”, Forbes.com.
Research centres
- Center for the Governance of AI, Future of Humanity Institute and the University of Oxford.
- Ethics and Governance of Artificial Intelligence Initiative, Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab.
Ethics and data governance assessment tools
- European Commission – Independent High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI”, April 2019.
- NYU Governance Lab, “Introducing Contracts for Data Collaboration”.
- “The Ethics of Data Sharing: A Guide to Best Practices and Governance”, Accenture, 2016.
- “Universal Principles of Data Ethics: 12 Guidelines for Developing Ethics Codes”, Accenture, 2016.
Risk and privacy assessment
- Centre for Information Policy Leadership, “A Risk-Based Approach to Privacy: Improving Effectiveness in Practice”, 2014 (contains a draft risk matrix for data).
- Data Protection Impact Assessment (DPIA) by UK Information Commissioner’s Office.
- Privacy Impact Assessment (PIA) by US Federal Trade Commission.
Examples of AI ethics boards and panels
- Australian Computer Society’s AI Ethics Committee.
- Axon’s AI Ethics Board.
- DeepMind’s Ethics & Society fellows (DeepMind is the AI research arm of Alphabet).
- DeepMind’s Health Advisory Board, Clinical Advisers and Patient Advisers.
- Lucid’s Ethics Advisory Board.
Endnotes
(All links as of 3/8/19)
- [1] McKinsey Global Institute, “Notes from the AI Frontier: Applications and Value of Deep Learning”, April 2018; PwC, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?”, 2017.
- [2] SAS, Accenture Applied Intelligence and Intel with Forbes Insight, “AI Momentum, Maturity & Models for Success”, 2018.