Contents
Introduction | Examples | Responsibilities | Oversight | Agenda | Resources | Endnotes
Introduction
Recruiting talent is one of the ways in which human resources departments can use AI. However, emerging use cases show that it can create brand risk and lead to less diverse hiring. Many people also see AI as a threat to their jobs, so human resources departments have a role to play in reskilling workers displaced by AI and in helping employees work alongside AI tools.
As Jeff Erhardt, vice-president of intelligent systems at GE Digital, said: “People tend to worry, thinking, ‘This is going to take away my job,’ ‘This is going to make a bad decision’, ‘This is not going to be useful to me’. If that happens, they’ll simply not use it, or put up such a high barrier to success that they’ll never allow it to go into production.”[1]
Boards, through their oversight of executive teams, establish the culture and catalyse the practices that are necessary for success with AI. Three sets of issues are at play:
1. Augmentation, automation and acceptance.
Some 70% of Fortune 500 companies are using AI predominantly to cut costs, often by eliminating labour through automation.[2] But AI strategists argue that the technology can improve productivity and produce more value when it gives people new capabilities. AI can help knowledge workers improve their judgement, enable manufacturers to increase their factories’ flexibility and allow executives to create innovative processes that can support new business models. For instance, designers at Airbus and General Motors are using AI-enabled “generative software” to help them find new design possibilities. They have already produced lighter, stronger automotive and aircraft parts.[3] Boards can press management to focus on opportunities to help workers and create employment, rather than threaten jobs.
2. The culture for AI success.
The use of AI systems requires trust, an eagerness to experiment, and mindfulness of ethics, responsibilities and risks from everyone involved. Employees must be confident that algorithms will lead to better decisions and actions, and executives need assurance that the use of algorithms won’t lead to legal trouble, inaccurate financial reports or other problems. AI must earn that trust with accurate, unbiased and explainable models; sufficient, accurate data; system reliability; and human control and accountability.[4] A productive AI culture balances that willingness to experiment and innovate against an appreciation of the attendant risks, responsibilities and ethical concerns. Directors should see that all of these cultural requirements are being targeted.
3. Inclusion and diversity.
Diversity is a weapon to be used against the risk of bias in algorithms. Inclusive and diverse AI teams can more readily recognize biases in AI data and models, and spot insensitivities in AI applications, quashing them before they get out the door. A diverse AI workforce also leads to better-functioning teams and more profitable companies. A recent report found a significant correlation between gender, racial and ethnic diversity and three financial metrics – revenues, operating margins and market value – among technology companies. Why? Diversity helps decision-makers and technology teams to avoid blind spots and groupthink, question assumptions, and be aware of differences of perspective. Diversity, combined with equality of opportunity and support for all employees, promotes innovative thinking and cultures, results in fewer mistakes, and leads to products that are better suited for different kinds of consumers.[5] So, while inclusion can help close the gender gap in AI[6] and prevent algorithmic biases that harm minority communities, even boards concerned only with maximizing profits and shareholder value should support a diverse, inclusive, barrier-smashing culture.
HR leaders must also look at how their own functions use AI. AI chatbots supported by machine learning are now in use for employee training.[7] AI vendors sell systems to recruit and evaluate employees. Systems that predict employee behaviour are in the works; IBM has built one that predicts, with 95% accuracy, which employees are likely to quit.[8] These systems should be used responsibly and ethically, and boards need to see that they are.
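The responsible-use checks that boards should expect can be made concrete. As a purely illustrative sketch (the group labels, outcomes and threshold logic are hypothetical, not drawn from any vendor’s system), an HR team might audit a screening model’s selection rates against the “four-fifths rule” commonly used in US adverse-impact analysis:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs from a screening model."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag potential adverse impact if any group's selection rate falls
    below 80% of the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate >= 0.8 * best) for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, selected?)
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70

for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(group, round(rate, 2), "OK" if passes else "potential adverse impact")
```

A check like this is only a first screen; a flagged disparity calls for legal and statistical review, not an automatic conclusion of bias.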
Examples
Johnson & Johnson
Johnson & Johnson is training IBM’s Watson to rapidly read and analyse scientific literature, saving scientists time in the drug-discovery process. Here, machine learning algorithms process large volumes of data to “unlock” information that is barely accessible to human readers, exemplifying the promise of AI for knowledge acquisition at scale.
Saudi Aramco
Saudi Aramco, the Saudi Arabia-based oil and gas company, has deployed multiple AI solutions in one of its gas-treating facilities to make work safer and more efficient. A machine learning-based solution monitors flares and detects issues from a flare’s shape and colour, sparing inspectors from having to examine flares up close. In addition, Saudi Aramco deployed an augmented reality-based solution that lets operators stream what they see to a remote command centre and receive instructions, reducing both the probability of mistakes and the cost of mobilizing experts to remote areas.[9]
Walmart
One way to encourage employees to embrace AI is to use it to improve the quality of their work and their skills. Walmart, the retail giant, has deployed AI-based co-bots and trains workers to collaborate with them. Workers now have more time to assist customers, as the co-bots have taken over mechanical jobs such as shelf scanning and floor cleaning. Workers can instead concentrate on tasks requiring higher cognitive, social and emotional skills, all of which will be in greater demand in the future.[10][11]
Responsibilities
Many of the same board responsibilities that the G20/OECD Principles of Corporate Governance assign to strategy and ethics also apply to AI adoption, culture and responsible use by HR.
To set the ethical tone for the company, boards should champion ethics, hold executives accountable for ethical behaviour, and see that guidelines for the responsible use of AI are developed and followed throughout the organization.
To act in good faith, with due diligence and care, boards should be fully informed about plans to apply AI in their strategy, AI’s alignment with core values and ethical standards, the risks associated with the company’s AI strategy and regulations affecting the use of AI. Directors should have access to accurate, relevant and timely information.
To oversee corporate strategy, major plans of action, risk management, and budgets and business plans, boards should review and guide management’s vision, goals, actions and expenditures for AI; its support for innovation and the use of new AI resources; its awareness of, and plans for, legal compliance and mitigating AI risk; and competitors’ use of and plans for AI.
To oversee corporate performance, expenditures and acquisitions, boards should review and guide the alignment of AI with strategy, shareholder value, ethics, performance and risk indicators, and implementation of AI plans. Also falling under board purview: oversight of the effectiveness of AI in accelerating processes and improving productivity; major investments in AI systems and talent; and acquisitions.
To carry out these responsibilities, boards should also review and guide these concerns:
Set an ethical tone for the company.
- Creation and enforcement of guidelines for responsible and ethical AI use.
- Ensuring a culture that embraces AI ethics and takes care to develop and use AI responsibly.
- Training in AI ethics and responsibilities.
Act in good faith, with due diligence and care.
- Strategies for establishing a culture that supports AI innovation in a responsible, ethical way.
- Employee attitudes towards AI.
- Strategies for earning employee trust in AI systems and achieving awareness and engagement in AI risk management.
- Strategies for improving productivity with AI through augmentation and automation.
- Regulations that affect use of AI by HR departments and managers.
- How AI is being used to acquire, develop, evaluate and manage talent; its effects on employee engagement; and best practices.
Oversee corporate strategy, major plans of action, risk management, and budgets and business plans.
- Management strategy to augment employee performance with AI.
- Management’s approach to using AI for talent acquisition and development.
- Management strategy to achieve a diverse and inclusive AI workforce.
- Major actions and expenditures for the use of AI in HR management, and progress towards successful implementation.
- Management’s plans and actions to encourage HR professionals to adopt AI solutions in their daily activities.
Oversee corporate performance, expenditures and acquisitions.
- Performance of AI used for HR management.
- Success in creating a culture that supports AI innovation, use and responsibility.
- Management compliance with data protection regulations (e.g. GDPR in the EU) and anti-discrimination laws (e.g. Civil Rights Act in the US).
In addition, board members are “expected to take due regard of, and deal fairly with, other stakeholder interests, including those of employees”. To fairly deal with employees’ interests, boards should also see that:
- AI systems used in recruiting, retaining and evaluating employees, and for other purposes by HR departments, are fair and unbiased.
- Employees have equal access to the benefits of AI.
- Employees’ personal data is protected and processed in accordance with the law (e.g. GDPR), kept secure and available only on a right-to-know basis.
- Systems used in HR management, and the decisions they make, are explainable and transparent.
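For simple models, the explainability asked for above can be demonstrated directly. As a hypothetical sketch (the weights and feature names are invented for illustration, not taken from any real HR system), a linear screening score can be decomposed into per-feature contributions so that any individual decision is traceable to its inputs:

```python
# Hypothetical linear screening model: score = sum of weight * feature.
# Weights and feature names are illustrative only.
WEIGHTS = {"years_experience": 0.4, "skills_match": 1.2, "assessment_score": 0.8}

def explain_score(candidate):
    """Return the total score and each feature's contribution to it,
    so a decision can be traced back to its inputs."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = explain_score(
    {"years_experience": 5, "skills_match": 0.7, "assessment_score": 0.9}
)
print(round(total, 2))  # overall score
for feature, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(feature, round(value, 2))  # contribution, largest first
```

More complex models (deep networks, large ensembles) do not decompose this neatly, which is one reason boards should ask what explanation methods accompany any HR system under review.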
The analysis in this section is based on general principles of corporate governance, including the G20/OECD Principles of Corporate Governance 2015. It does not constitute legal advice and is not intended to address the specific legal requirements of any jurisdiction or regulatory regime. Boards are encouraged to consult with their legal advisers in determining how best to apply the principles discussed in this module to their company.
Oversight
This section includes three tools to help directors oversee people and culture strategy for AI.
The knowledge assessment tool helps board members rate whether they possess, or have access to, the knowledge required to independently judge management’s knowledge and leadership on people and culture issues regarding AI, such as the future of work, promoting innovation, ethics and trust, staffing, and using AI in HR management.
View Appendix 1 for the knowledge assessment tool here
The performance review tool consists of questions boards can ask management about their knowledge of people and culture strategy for AI, and the progress and performance of their actions. It offers the SCEPTIC framework to help directors assess the answers they receive.
View Appendix 2 for the performance review tool here
The guidance tool offers suggestions for further action in an “If, then” format.
Agenda
The following suggestions can help the individual who prepares the board discussion and sets the agenda for discussing the people and culture side of AI strategy.
Before leading the first meeting
- Prepare yourself: Become familiar with AI, the HR management and cultural requirements for AI success, and AI’s ethics, recruiting and risk management challenges. Speak to senior HR, legal and IT executives, particularly those responsible for developing AI systems. Hear from line managers about employee attitudes towards AI. The Resources section provides further reading and frameworks on AI and risks.
- Gauge board member interest in inclusion, culture, ethics and staffing issues involving AI: Speak to other board members. Learn what importance they place on the people and culture strategy for AI, which issues are top of their minds, and which are not well understood. Identify the board members who are most interested in moving forward with new AI investments, and those who have concerns or lack interest.
- Set goals: Think ahead about the desired outcomes of the board discussion.
Set the initial agenda, clarifying the cultural requirements for AI success
Agenda items can include:
- Review: Discuss the company’s strategy goals and intentions for AI.
- Discussion: What are the people and culture requirements for achieving that strategy? Look at the issue from different angles: innovation, risk, ethics and trust. Are the requirements for achieving the company’s goals in place? What work needs to be done? What work is already being done?
- Delegate: Decide on next steps and priorities. These can include directing management to focus on issues such as ethics training, to develop plans or to report on progress.
- Engage: Decide how the board will continue to follow management’s work.
Set follow-up or alternative agenda items
These can include:
- AI innovation: Are product developers and process leaders using AI to create more innovative products, services and processes? Look at how to achieve more innovation with AI by providing resources, removing obstacles and establishing expectations.
- Diversity and inclusion: Discuss how to achieve a more diverse workforce among the teams developing AI systems, and include a broader range of perspectives by supporting inclusion.
- Augmenting employees through AI: Discuss opportunities to provide managers, professionals and staff with AI systems that enable them to be more productive, do more valuable work and make better decisions.
- Creating an ethical AI culture: What are the ethics issues on which management and employees must focus as they work with AI? Review the steps being taken to develop ethical thinking, habits and resources for employees.
- Overcoming the AI talent shortage: Review skill and job needs for AI and plans to meet them.
Resources
(All links as of 11/8/19)
Books
- Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, W.W. Norton and Company, 2014.
- Jacob Morgan, The Future of Work: Attract New Talent, Build Better Leaders, and Create a Competitive Organization, Wiley, 2014.
- Paul R. Daugherty and H. James Wilson, Human + Machine: Reimagining Work in the Age of AI, Harvard Business Review Press, 2018.
- Thomas A. Kochan, Shaping the Future of Work: A Handbook for Action and a New Social Contract, MIT Press, 2017.
Reports and papers
- Ellyn Shook and Julie Sweet, “Equality = Innovation: Getting to Equal 2019, Creating a Culture that Drives Innovation”, Accenture, 2019.
- Ellyn Shook and Mark Knickrehm, “Reworking the Revolution”, Accenture, 2018.
- Haiyan Zhang, Sheri Feinzig and Hannah Hemmingham, “Making Moves Internal Career Mobility and the Role of AI”, IBM Smarter Workforce Institute research, 2017.
- Paul Daugherty, Eva Sage-Gavin and Madhu Vazirani, “Missing Middle Skills for Human-AI Collaboration”, Accenture, 2018.
- Peter Cappelli, Prasanna Tambe and Valery Yakubovich, “Artificial Intelligence in Human Resources Management: Challenges and a Path Forward”.
- Till Alexander Leopold, Saadia Zahidi and Vesselina Ratcheva, “The Future of Jobs Report 2018”, World Economic Forum, 2018.
- “Decoding Diversity: The Financial and Economic Returns to Diversity in Tech”, Intel, 2016.
- “Shaping an Ethical Workplace Culture”, Society for Human Resource Management Foundation, 2013.
- “The Future of Human Resources: A Glimpse into the Future”, Deloitte.
Articles
- Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband and Lucinda Schettler, “The Legal and Ethical Implications of Using AI in Hiring”, HBR.org, 25 April 2019.
- Nicholas Epley and Amit Kumar, “How to Design an Ethical Organization”, Harvard Business Review, May-June 2019.
- Miranda Bogen, “All the Ways Hiring Algorithms Can Introduce Bias”, HBR.org, 6 May 2019.
Endnotes
(All links as of 11/8/19)
- [1] Dan Woods, “Three Critical Success Factors For Avoiding AI and ML Failure”, Forbes.com, 16 August 2018.
- [2] Fortune CEO Daily newsletter, 15 May 2019.
- [3] “Future of Making Things: Customer Innovation Spotlight”, Autodesk.com.
- [4] Berkeley J. Dietvorst et al., “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err”.
- [5] “Decoding Diversity: The Financial and Economic Returns to Diversity in Tech”, Intel, 2016; Ellyn Shook and Julie Sweet, “Equality = Innovation: Getting to Equal 2019, Creating a Culture that Drives Innovation”, Accenture, 2019.
- [6] Lauren D’Ambra Faggella, “Women in Artificial Intelligence – a Visual Study of Leadership Across Industries”, TechEmergence, 15 September 2017; World Economic Forum Global Gender Gap Report 2018.
- [7] Taryn Oesch, “Our ‘Digital Friends’: Using Chatbots in Corporate Training”, Trainingindustry.com, 5 July 2018.
- [8] Eric Rosenbaum, “IBM Artificial Intelligence Can Predict with 95% Accuracy which Workers Are about to Quit Their Jobs”, CNBC, 3 April 2019; Sascha Eder, “Should You Use AI for Performance Review?”, LinkedIn Pulse, 31 July 2018.
- [9] Saudi Aramco, "Saudi Aramco recognized as a leader in the Fourth Industrial Revolution", 2019.
- [10] Jacques Bughin and James Manyika, "Your AI Efforts Won’t Succeed Unless They Benefit Employees", Harvard Business Review, July 2019.
- [11] "#SquadGoals: How Automated Assistants are Helping Us Work Smarter", Walmart, April 2019.