Ethics: Empowering AI Leadership

Introduction

The potential of artificial intelligence (AI) to cause harm as well as to create great good has made AI ethics one of the field’s most publicized areas, and an emerging focus of boards’ oversight responsibilities.

Any information technology can be used for good or ill. But AI requires special attention because it magnifies the hazards of IT, enlarging both the scope and the scale of its impact. Moreover, with AI it is not just a matter of how a product is used once deployed: the behaviour of an AI system is tightly linked to its design. Setting ethics standards for AI systems is necessary to avoid harm and increase well-being. These standards include the principles that AI systems and their developers, operators and managers should follow as they develop, operate and use AI systems and the data on which they depend. They also include the expectation that organizations will comply with those principles and with the law. AI ethics standards are necessary, but they are not enough: standards must be taught, facilitated, absorbed and, where necessary, enforced through an organization’s culture and workflow.

This module provides tools and information to help boards oversee the setting of ethics standards and the establishment of an AI ethics board. For guidance on who oversees ethics and other AI decisions, and how, please review the Governance module.

AI’s ethics hazards

AI presents new hazards because it has capabilities that previous IT systems lacked. Counterfeiters can now create realistic fake videos, and companies can build chatbots whose conversation is nearly indistinguishable from a human’s. Autonomous systems could make life-and-death decisions without human oversight. More generally, AI systems can make decisions, or recommend them to human operators, some with very high stakes.

Because AI systems learn, they are vulnerable to being trained by deliberately or inadvertently biased data, and to developing and following decision-making models with hidden biases. For example, AI systems have been accused of rejecting female candidates for jobs and recommending disproportionate criminal sentences and policing of minority groups due to such biases.[1]
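
To make the bias hazard concrete, the following minimal Python sketch (with entirely hypothetical hiring data) shows the kind of disparate-impact check an ethics review might run over a model’s decisions, using the “four-fifths rule” that US employment regulators apply to selection rates.

```python
# Hypothetical disparate-impact check on a model's hiring recommendations.

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decisions: (applicant group, did the model recommend hiring?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

rates = selection_rates(decisions)
# The four-fifths rule flags possible adverse impact when one group's
# selection rate falls below 80% of the highest group's rate.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print(f"Possible adverse impact: rates={rates}, ratio={worst / best:.2f}")
```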

Many AI techniques resist explainability, making it difficult to pinpoint the reasons for a specific decision, or to assess whether the decision path crossed an ethical line. Certain kinds of machine learning-based AI systems can make decisions without human oversight, based on complex patterns beyond human comprehension, and can thus make decisions that cannot be predicted. Humans may not be capable of overriding such AI systems when they make instantaneous decisions in real time, as with autonomous vehicles and aircraft.[2]
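
Post-hoc techniques can partially probe otherwise opaque models. The sketch below illustrates one widely used idea, permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops. The model and data here are hypothetical stand-ins, not a recommendation of any particular method.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's output matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    shuffled_col = [row[feature_idx] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled_col)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

# Hypothetical "opaque" model: approves when feature 0 exceeds a threshold.
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.4], [0.1, 0.9]]
y = [True, False, True, False]

for i in range(2):
    print(f"feature {i}: importance {permutation_importance(model, X, y, i):+.2f}")
```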

The way human–machine interactions take place affects the risk-management process. AI systems grow more accurate as they learn; the risk, however, is that users come to accept decisions made solely by the technology without proper human oversight.
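
A common safeguard against such over-reliance is to keep a human in the loop for low-confidence or high-stakes decisions. The sketch below illustrates this routing logic; the threshold, the model and the notion of a “high-stakes” case are all assumptions that an organization’s own policy would define.

```python
REVIEW_THRESHOLD = 0.90  # assumed confidence cutoff; set by policy, not here

def decide(case, model):
    """Act automatically only when confident and low-stakes; else escalate."""
    label, confidence = model(case)
    if confidence < REVIEW_THRESHOLD or case.get("high_stakes"):
        return {"decision": None, "route": "human_review",
                "model_suggestion": label, "confidence": confidence}
    return {"decision": label, "route": "automated", "confidence": confidence}

# Hypothetical model returning a (label, confidence) pair.
model = lambda case: ("approve", 0.97) if case["score"] > 0.7 else ("approve", 0.60)

print(decide({"score": 0.9, "high_stakes": False}, model))  # automated
print(decide({"score": 0.9, "high_stakes": True}, model))   # human review
print(decide({"score": 0.5, "high_stakes": False}, model))  # human review
```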

Defining ethical imperatives

Technology companies, professional associations, government agencies, NGOs and academic groups have already developed many AI codes of ethics and professional conduct. While these can be helpful resources for organizations developing their own codes, they do not provide a universal solution. Organizations must develop their own.

While people may agree on broad concepts, the specifics of those concepts may differ. Ethics codes for AI systems may focus on transparency, privacy and benefitting society at large, but our review of existing codes shows that the definitions, or what is included under these terms, can vary.

Public codes provided by technology companies are often aspirational rather than practical guides: some omit specific commitments or fail to define what a stated aspiration means. Academic codes are often aspirational in another way: they make specific commitments to the public good without spelling out how those commitments will be met.

As organizations wrestle with developing codes, they will discover what moral philosophers already know: ethical decision-making is often difficult. What is considered right and wrong can differ markedly by culture. People can view rights and legitimate interests from very different, and sometimes conflicting, perspectives, based on their experience and milieu. Some ethical dilemmas present a choice between lesser evils rather than between right and wrong. Ethics codes may therefore be less about simply doing the right thing than about making informed choices within guardrails that set the limits of acceptable behaviour.

Codes address different audiences with different needs. Internally, codes must be the core of the mechanisms that guide ethical decisions as organizations develop, deploy and use AI, such as codes of conduct and grounds for dismissal. They must also be guarantors of employees’ rights and protections, as well as of their responsibilities. Externally, codes serve as assurances to the public, guarantees to customers and guidelines for vendors and partners. Effective AI ethics codes fulfil all these functions.

Defining the ethical principles that AI systems and their human creators and users should follow requires thoughtful analysis and consensus-building. So too does the implementation of those principles.

These issues must be thought through by leaders of each organization that deploys AI. While certain broad principles are found in nearly all codes, no one code is sufficient and appropriate for every organization.

The costs of ethical failure

Any failure to consider and address these issues and concerns could drive away clients, partners and employees. Their trust in both the technology and the company can easily be diminished or lost if the company is not clear about its policies on AI issues, or if the delivered technology acts as a “black box”.

Moreover, there may also be legal and regulatory consequences. Many AI application domains, such as financial systems, are heavily regulated with regard to fairness, diversity and inclusion. Enterprises need to ensure that compliance with these regulations is maintained when they introduce AI systems into such domains. Companies should understand the origin of the data used to train AI models and what may lawfully be done with it. Several regulations, such as the General Data Protection Regulation (GDPR) in Europe, place restrictions on how data, especially personal data, may be treated. Furthermore, accountability, redress and liability for AI systems are likely to feature in future litigation, and litigators and legal decision-makers will require clarity around these issues.
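
One practical way to answer such questions is to record provenance and consent metadata alongside every record, so that a dataset can be filtered by documented purpose before a model is trained. The schema and purpose labels in this sketch are hypothetical; it illustrates the bookkeeping, not a compliance mechanism.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str        # where the data came from
    collected_on: str  # ISO date of collection
    consent_for: set   # purposes the data subject consented to
    personal: bool     # contains personal data (GDPR-relevant)?

def usable_for(records, purpose):
    """Keep only records whose documented consent covers this purpose."""
    return [r for r in records
            if not r.personal or purpose in r.consent_for]

# Hypothetical dataset with per-record provenance metadata.
dataset = [
    Record("crm_export", "2019-03-01", {"service", "model_training"}, True),
    Record("crm_export", "2019-04-12", {"service"}, True),
    Record("public_stats", "2018-11-30", set(), False),
]

train_set = usable_for(dataset, "model_training")
print(f"{len(train_set)} of {len(dataset)} records usable for training")
```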

Responsibilities

Oversight of ethical standards for AI is a fundamental responsibility of the board. Our analysis of the G20/OECD Principles of Corporate Governance provides three main reasons:

The board ensures that ethics matter by hiring ethical executives and holding them accountable.

According to Principle VI.C: “The board has a key role in setting the ethical tone of a company, not only by its own actions, but also in appointing and overseeing key executives and consequently the management in general. High ethical standards are in the long-term interests of the company as a means to make it credible and trustworthy, not only in day-to-day operations but also with respect to longer-term commitments.”[3]

Boards cannot execute their responsibilities without ethical standards.

Boards cannot effectively judge whether the company’s AI strategy, plans and performance align with its core values and ethical standards unless those standards are articulated. Without knowing which ethical standards should guide them, or the dangers of violating them, boards cannot consistently “act on a fully informed basis, in good faith, with due diligence and care” (Principle VI.A). Nor can they “review corporate strategy, major plans of action, risk-management policies and procedures” for alignment with ethical standards, judge whether “setting performance objectives” includes meeting ethical standards, or determine whether ethical standards are met when “monitoring implementation and corporate performance, and overseeing major capital expenditures, acquisitions and divestitures” (Principle VI.D.1). Boards must therefore make sure these standards are set. For details about the ethical responsibilities pertaining to a specific strategy topic, please see the responsibility sections of the individual modules. For ethical responsibilities that are relevant across modules, please see the Responsibility module.

Boards protect whistleblowers.

Ethics standards are toothless if employees and other stakeholders cannot report violations to the highest authorities. “Stakeholders, including individual employees and their representative bodies, should be able to freely communicate their concerns about illegal or unethical practices to the board and to the competent public authorities and their rights should not be compromised for doing this” (Principle IV.E). To protect stakeholders’ ability to communicate ethics concerns regarding AI and data practices, the board should ensure the company establishes procedures and safe harbours for complaints by employees, their representative bodies and others outside the company, concerning illegal and unethical behaviour. The board should also give them direct and confidential access to an independent board member.

The analysis in this section is based on general principles of corporate governance, including the G20/OECD Principles of Corporate Governance, 2015. It does not constitute legal advice and is not intended to address the specific legal requirements of any jurisdiction or regulatory regime. Boards are encouraged to consult with their legal advisers in determining how best to apply the principles discussed in this module to their company.

Oversight

This section includes five tools:

The AI ethics principles development tool helps boards of directors and AI ethics boards develop an AI ethics code. It can also be used to guide board discussions on ethics principles. This tool contains eight broad principles, with more specific principles that follow on from them, and a way to evaluate their relevance.

View Appendix 1 for the principles development tool here

Goals and guidance for the AI ethics board provides questions to consider before establishing an AI ethics board. It contains suggestions for goals for the AI ethics board to accomplish and guidance for it to follow. It also contains issues for the board of directors to consider in advance.

View Appendix 2 for the AI ethics board goals and guidance tool here

Selecting the members of the AI ethics board suggests requirements to consider when appointing members to the AI ethics board. It contains four major requirements, along with questions to ask and actions to undertake during the search and evaluation process.

View Appendix 3 for the AI ethics board member selection tool here

Assessing the draft AI ethics code provides questions to help directors evaluate the draft code presented by the AI ethics board.

View Appendix 4 for the AI ethics code assessment tool here

Assessing implementation, monitoring and enforcement of the AI ethics code includes questions to help boards evaluate whether they are receiving the information they require to carry out their oversight responsibilities, and whether management and the AI ethics board are effectively carrying out these responsibilities.

View Appendix 5 for the implementation, monitoring and enforcement tool here

Agenda

The following suggestions can help individuals who prepare board discussions and set the board’s agenda on ethics and AI:

Before leading the first meeting

  • Prepare yourself: Become familiar with the ethics issues created by AI, and the most urgent ethics issues for the board to address. The ‘Resources’ section provides reading on ethics issues. Speak to executives, in particular senior IT executives and security officers, about the ethics issues that concern them.
  • Gauge board member interest in AI ethics: Speak to other board members. Learn what importance they place on AI ethics and what concerns they have. Identify the board members who are most interested in achieving rapid progress on addressing AI ethics issues, and those who have concerns or lack interest.
  • Set goals: Think ahead about the desired outcomes from the board discussion.

Set the initial agenda items

These may comprise:

Creating an AI ethics code. Steps include:

  • Discuss: Review the relevant AI ethics issues for your company, and your organization’s readiness to address them (the AI ethics principles development tool can help spark the discussion). The discussion should be informed not only by the organization’s legal responsibilities and risks, but also by its ethical values and employee expectations.
  • Frame: Discuss how the AI ethics code will be created: whether by a new or existing ethics board or committee, the resources it will be given, and how to assure its independence.
  • Set goals: Consider the goals and guidelines for the board that will develop AI ethics guidelines.
  • Delegate: Discuss who on the board will be responsible for establishing an AI ethics board, the qualifications of the AI ethics board members, and the goals and guidelines to set before them. These results should help meet the board’s responsibilities, as recommended by the G20/OECD Principles.
  • Engage: Discuss how the board will support the ethics code creation process without compromising the independence of the AI ethics board.

Set follow-up agenda items

These can include:

  • Review progress: Discuss the work of the AI ethics board, including evaluating a draft AI ethics code, its implementation and its impact.
  • Access to information: Discuss with fellow board members what information they require in order to stay current with emerging AI ethics issues.
  • Employee safeguards: Discuss what steps to take to ensure the board is aware of illegal or unethical practices.
  • Review ethics programmes: Periodically review the effectiveness of the ethics code and whether it needs to be updated.
  • Awareness of ethics issues: Periodically review emerging and potentially significant ethics issues at your organization, the partners on whom the company relies and across your industry.
  • Broaden board thinking: Invite an expert on AI ethics to present to the board. Experts could include academics or leaders from organizations devoted to AI ethics issues.

Resources

(All links as of 18/8/19)

Books

  • Anastassia Lauterbach, The Artificial Intelligence Imperative: A Practical Roadmap for Business, Praeger, 2018.
  • H. James Wilson and Paul R. Daugherty, Human + Machine – Reimagining Work in the Age of AI, Harvard Business School Press, 2018.
  • Paula Boddington, Towards a Code of Ethics for Artificial Intelligence, Springer, 2017.

Toolkits

Articles and reports

AI ethics for boards

Other articles and reports

Panels and boards

  • Australian Computer Society’s AI Ethics Committee.
  • Axon’s AI Ethics Board.
  • DeepMind’s Ethics and Society fellows (DeepMind is the AI research arm of Alphabet).
  • Lucid’s Ethics Advisory Board.
  • Microsoft’s AETHER Committee.

Endnotes

(All links as of 18/8/19)
