The potential of artificial intelligence (AI) to cause harm as well as to do great good has aroused intense interest in the ethics of the field, making it one of the most publicized areas of AI and an emerging focus of a board's oversight responsibilities.
Any information technology can be used for good or ill. But AI requires special attention because it magnifies the hazards of IT, enlarging both their scope and the scale of their impact. Moreover, with AI it is not just a matter of how a product is used once deployed: the use of an AI system is tightly linked to its design. Setting ethics standards for AI systems is necessary to avoid harm and increase well-being. Standards include the principles that AI systems and their developers, operators and managers should follow as they develop, operate and use AI systems and the data on which they depend. They also include the expectation that organizations will comply with those principles and with the law. AI ethics standards are necessary, but they are not enough: standards need to be taught, facilitated, absorbed and possibly enforced in an organization's culture and workflow.
This module provides tools and information to help boards oversee the setting of ethics standards and the establishment of an ethics board. For guidance on who oversees ethics and other AI decisions, and how boards do so, please review the Governance module.
AI’s ethics hazards
AI presents new hazards because it has capabilities that previous IT systems lacked. Counterfeiters can now create realistic fake videos, and companies can build chatbots whose interactions are nearly indistinguishable from those of humans. Autonomous systems could make life-and-death decisions without human oversight. In general, AI systems can make decisions, or recommend them to human operators, some with very high stakes.
Because AI systems learn, they are vulnerable to being trained by deliberately or inadvertently biased data, and to developing and following decision-making models with hidden biases. For example, AI systems have been accused of rejecting female candidates for jobs and recommending disproportionate criminal sentences and policing of minority groups due to such biases.
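As an illustration of how such hidden biases can be surfaced, the sketch below compares selection rates across two applicant groups, a simplified version of the "demographic parity" check commonly used in fairness audits. The data, group labels and decisions here are invented for illustration; a real audit would use actual model outputs and established fairness tooling.

```python
# Minimal sketch: comparing selection rates across groups to flag possible bias.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

# Hypothetical screening outcomes from a trained model
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", False), ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")   # 0.50 -- a large gap warrants investigation
```

A large gap between groups does not prove discrimination on its own, but it is exactly the kind of signal that should trigger human review of the training data and the model's decision path.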
Many AI techniques resist explainability, rendering it difficult to pinpoint the reasons for a specific decision, and also to assess if the decision path crossed an ethical line. Certain kinds of machine learning-based AI systems can make decisions without human oversight, based on complex patterns beyond human comprehension, and thus make decisions that cannot be predicted. Humans may not be capable of overriding such AI systems when they make instantaneous decisions in real time, as with autonomous vehicles and aircraft.
How human–machine interactions take place affects the risk management process. Accuracy improves as AI systems learn, but there is a risk that decisions made solely by the technology are accepted by users without proper human oversight.
Defining ethical imperatives
Technology companies, professional associations, government agencies, NGOs and academic groups have already developed many AI codes of ethics and professional conduct. While these can be helpful resources for organizations developing their own codes, they do not provide a universal solution. Organizations must develop their own.
While people may agree on broad concepts, the specifics of those concepts may differ. Ethics codes for AI systems may focus on transparency, privacy and benefitting society at large, but our review of existing codes shows that the definitions, or what is included under these terms, can vary.
Public codes provided by technology companies are often aspirational rather than helpful guidelines. Some omit specific commitments, or fail to define what a stated aspiration means. Academic codes are often aspirational in another way: they make specific commitments to the public good without spelling out how.
As organizations wrestle with developing codes, they will discover what moral philosophers already know: ethical decision-making is often difficult. What is considered right and wrong can differ markedly by culture. People can view rights and legitimate interests from very different and sometimes conflicting perspectives, based on their experience and milieu. Some ethical dilemmas may present a choice between lesser evils rather than between right and wrong. Ethics codes may be about making informed choices within guardrails that set the limits of acceptable behaviour, not simply about doing the right thing.
Codes address different audiences with different needs. Internally, codes must be the core of guidance mechanisms for making ethical decisions as organizations develop, deploy and use AI, such as codes of conduct and grounds for dismissal. They must also be guarantors of employees' rights and protections, as well as their responsibilities. Externally, they serve as assurances to the public, guarantees to customers and guidelines for vendors and partners. Effective AI ethics codes fulfil all these functions.
Defining the ethical principles that AI systems and their human creators and users should follow requires thoughtful analysis and consensus-building. So too does the implementation of those principles.
These issues must be thought through by leaders of each organization that deploys AI. While certain broad principles are found in nearly all codes, no one code is sufficient and appropriate for every organization.
The costs of ethical failure
Any failure to consider and address these issues and concerns could drive away clients, partners and employees. Their trust in both the technology and the company can easily be diminished or lost if the company is not clear about its policies on AI issues, and if the delivered technology acts as a “black box”.
Moreover, there may also be legal and regulatory consequences. Many AI application domains, such as financial systems, are heavily regulated in regard to fairness, diversity and inclusion. Enterprises need to ensure that compliance with these regulations is maintained when they inject AI systems into these domains. Companies should understand the origin of the data used to train AI models and what it is possible to do with it. Several regulations, such as the General Data Protection Regulation (GDPR) in Europe, put restrictions on how to treat data, especially personal data. Furthermore, accountability, redress and liability of AI systems are likely to feature in future litigation. Litigators and legal decision-makers will require clarity around such issues.
This section includes five tools:
The AI ethics principles development tool helps boards of directors and AI ethics boards develop an AI ethics code. It can also be used to guide board discussions on ethics principles. This tool contains eight broad principles, with more specific principles that follow on from them, and a way to evaluate their relevance.
Goals and guidance for the AI ethics board provides questions to consider before establishing an AI ethics board. It contains suggestions for goals for the AI ethics board to accomplish and guidance for it to follow. It also contains issues for the board of directors to consider in advance.
Selecting the members of the AI ethics board suggests requirements to consider when appointing members to the AI ethics board. It contains four major requirements, along with questions to ask and actions to undertake during the search and evaluation process.
Assessing the draft AI ethics code provides questions to help directors evaluate the draft code presented by the AI ethics board.
Assessing implementation, monitoring and enforcement of the AI ethics code includes questions to help boards evaluate whether they are receiving the information they require to carry out their oversight responsibilities, and whether the management team of the AI ethics board is effectively carrying out these responsibilities.
(All links as of 18/8/19)
Books
- Anastassia Lauterbach, The Artificial Intelligence Imperative: A Practical Roadmap for Business, Praeger, 2018.
- H. James Wilson and Paul R. Daugherty, Human + Machine: Reimagining Work in the Age of AI, Harvard Business Review Press, 2018.
- Paula Boddington, Towards a Code of Ethics for Artificial Intelligence, Springer, 2017.
- Markkula Center for Applied Ethics, Santa Clara University, “An Ethical Toolkit for Engineering/Design Practice”.
Articles and reports
AI ethics for boards
- Deloitte, “The Board’s Role in Ethics and Compliance”.
- Peter Collins, “We Need a More Open Debate on AI and Ethics in the Boardroom”, Centre for Ethical Leadership.
- Sabine Vollmer, “The Board’s Role in Promoting an Ethical Culture”, Journal of Accountancy.
- Trooper Sanders, “How a Strong Board of Directors Keeps AI Companies on an Ethical Path”, VentureBeat.
Other articles and reports
- Accenture, “Facilitating Ethical Decisions throughout the Data Supply Chain”.
- Accenture, “Is Explainability Enough? Why We Need Understandable AI”; “Responsible AI and Robotics, an Ethical Framework”; “The Responsible AI Imperative”.
- Annette Zimmermann and Bendert Zevenbergen, “AI Ethics: Seven Traps”.
- European Economic and Social Committee, “Ethics of Big Data”.
- European Group on Ethics in Science and New Technologies, “IBE, Business Ethics and Big Data”.
- IEEE, “Ethically Aligned Design”.
- IBM, “Everyday Ethics for Artificial Intelligence: A Guide for Designers & Developers”.
- OECD, “Going Digital”.
- PwC, “Responsible AI and National AI Strategies”, for the EU Commission.
- World Economic Forum, “Top 9 Ethical Issues in AI”, 2016.
Panels and boards
- Australian Computer Society’s AI Ethics Committee.
- Axon’s AI Ethics Board.
- DeepMind’s Ethics and Society fellows (AI research arm of Alphabet).
- Lucid’s Ethics Advisory Board.
- Microsoft’s AETHER Committee.