This is the province of board-level AI governance. AI governance requires attention because:
- Artificial intelligence creates new technology governance challenges and amplifies existing ones. AI introduces a novel question for governance: how to oversee systems that can learn and make complex decisions independently, yet remain vulnerable to human weaknesses such as bias and to criminal misuse. Beyond setting ethics standards (see Ethics module), leaders must decide how to select the decision-making models at the heart of machine-learning systems and determine the appropriate degrees of transparency and human control. AI also raises the stakes for data governance, because training machine-learning systems requires enormous datasets, and for workforce governance, since many employees will see their work changed or eliminated by AI systems. Boards must also consider whether today’s governance processes and accountabilities can address the questions AI raises about strategy, risk and other control issues.
- To meet a changing regulatory landscape. Laws are evolving quickly to protect individuals’ privacy and liberty against misuse of data. Given the wide scope of artificial intelligence and machine-learning technology, organizations will benefit from taking a proactive approach to the ethical governance of AI rather than retrofitting technologies later to comply with legal standards. (See sidebars on regulations in Europe and North America below.)
- To align AI with values and ethics. Explicability, transparency, accountability, fairness, responsible data handling, guarding against criminal misuse, aligning AI with organizational values and mindfulness of AI’s societal implications are all essential pillars of the responsible use of data and AI.
In Europe
- The General Data Protection Regulation (GDPR) is a regulatory framework that sets forth legal protections regarding use of the data of all EU data subjects, regardless of the country in which they reside or the platform on which their data resides.
- The EU AI High Level Expert Group (AI HLEG) was formed to advise the European Commission and make recommendations on AI ethics guidelines as well as policies for funding and infrastructures for AI in Europe.
- In the UK, the government released an "AI Sector Deal", which includes the establishment of a government-wide Office for AI to oversee governance. In addition, the House of Lords Select Committee on Artificial Intelligence produced a comprehensive report on artificial intelligence.
In North America
- The California Consumer Privacy Act (CCPA) will take effect in 2020. Because there is no federal privacy standard, the CCPA may become the de facto privacy standard for the US unless Congress passes its own.
- On 16 May 2018, Mayor Bill de Blasio of New York City announced the creation of the Automated Decision Systems Task Force. The task force will recommend criteria and procedures for reviewing and assessing algorithmic tools purchased and deployed by the city. These recommendations may include procedures for explaining algorithmic decisions, allowing public appeals of such decisions, ensuring algorithms are not biased against highly sensitive groups and making technical information about deployed algorithms publicly available.
(All links as of 3/8/19)
Articles and reports on AI governance
- “AI Governance and Its Future,” The Aspen Institute.
- “A Proposed Model Artificial Intelligence Governance Framework,” Personal Data Protection Commission, Singapore, January 2019.
- “Perspectives on Issues in AI Governance,” Google.
- Rumman Chowdhury, “An AI Governance Approach to Support Innovation,” 1776.vc, 5 April 2019.
- Urs Gasser and Virgilio A.F. Almeida, “A Layered Model for AI Governance,” IEEE Internet Computing, 2017.
- “Wrestling with AI Governance around the World,” Forbes.com.
- Center for the Governance of AI, Future of Humanity Institute, University of Oxford.
- Ethics and Governance of Artificial Intelligence Initiative, Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab.
Ethics and data governance assessment tools
- European Commission – Independent High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI,” April 2019.
- NYU Governance Lab, “Introducing Contracts for Data Collaboration”.
- “The Ethics of Data Sharing: A Guide to Best Practices and Governance,” Accenture, 2016.
- “Universal Principles of Data Ethics: 12 Guidelines for Developing Ethics Codes,” Accenture, 2016.
Risk and privacy assessment
- Centre for Information Policy Leadership, “A Risk-Based Approach to Privacy: Improving Effectiveness in Practice,” 2014 (contains a draft risk matrix for data).
- Data Protection Impact Assessment (DPIA) by UK Information Commissioner’s Office.
- Privacy Impact Assessment (PIA) by US Federal Trade Commission.
Examples of AI ethics boards and panels
- Australian Computer Society’s AI Ethics Committee.
- Axon’s AI Ethics Board.
- DeepMind’s Ethics & Society fellows (DeepMind is the AI research arm of Alphabet).
- DeepMind’s Health Advisory Board, Clinical Advisers and Patient Advisers.
- Lucid’s Ethics Advisory Board.