Augmentation, automation and acceptance.
Some 70% of Fortune 500 companies use AI predominantly to cut costs, often by eliminating labour through automation. But AI strategists argue that the technology improves productivity and produces more value when it gives people new capabilities: AI can help knowledge workers improve their judgement, enable manufacturers to make their factories more flexible, and allow executives to create innovative processes that support new business models. For instance, designers at Airbus and General Motors are using AI-enabled “generative design” software to explore new design possibilities, and have already produced lighter, stronger automotive and aircraft parts. Boards can press management to focus on opportunities that help workers and create employment, rather than on automation that threatens jobs.
The culture for AI success.
The use of AI systems requires trust, an eagerness to experiment and mindfulness of ethics, responsibilities and risks on the part of everyone involved. Employees must be confident that the algorithms will lead to better decisions and actions, and executives need assurance that the use of algorithms won’t lead to legal trouble, inaccurate financial reports or other problems. AI must earn that trust through accurate, unbiased and explainable models; sufficient and accurate data; system reliability; and human control and accountability. A productive AI culture also requires a willingness to experiment and innovate, balanced by an appreciation of the attendant risks, responsibilities and ethical concerns. Directors should see that all of these cultural requirements are being addressed.
Inclusion and diversity.
Diversity is a weapon against the risk of bias in algorithms. Inclusive and diverse AI teams can more readily recognize biases in AI data and models, and spot insensitivities in AI applications, quashing them before they get out the door. A diverse AI workforce also leads to better-functioning teams and more profitable companies: a recent report found a significant correlation between gender, racial and ethnic diversity and three financial metrics – revenues, operating margins and market value – among technology companies. Why? Diversity helps decision-makers and technology teams avoid blind spots and groupthink, question assumptions and stay aware of differing perspectives. Diversity, combined with equality of opportunity and support for all employees, promotes innovative thinking, results in fewer mistakes and leads to products better suited to different kinds of consumers. So, while inclusion can help close the gender gap in AI and prevent algorithmic biases that harm minority communities, even boards concerned only with maximizing profits and shareholder value should support a diverse, inclusive, barrier-smashing culture.
HR leaders must also look at how their own functions use AI. AI chatbots supported by machine learning are now in use for employee training. AI vendors sell systems to recruit and evaluate employees. Systems that predict employee behaviour are in the works; IBM has created a system that identifies employees who are likely to quit with 95% accuracy. These systems should be used responsibly and ethically, and boards need to see that they are.
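To make the oversight question concrete, the sketch below shows the kind of statistical model that typically sits inside such attrition-prediction tools. It is a toy illustration only, not IBM’s system: the features (overtime, time since promotion, an engagement score) and the training data are invented, and the model is a plain logistic regression trained by gradient descent.

```python
# Toy attrition-risk model (illustrative only; NOT IBM's system).
# Features per employee: [overtime hours/week, years since last
# promotion, engagement score 0-1]. All data here is hypothetical.
import math

def sigmoid(z):
    # Clamp to avoid math.exp overflow on extreme inputs.
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.05, epochs=500):
    """Fit logistic-regression weights w and bias b by stochastic
    gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical history: 1 = quit within a year, 0 = stayed.
X = [[12, 4, 0.2], [2, 1, 0.9], [10, 3, 0.3], [1, 0, 0.8],
     [8, 5, 0.4], [0, 1, 0.95], [11, 2, 0.25], [3, 0, 0.7]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

w, b = train(X, y)
# Score a current employee with a high-overtime, low-engagement profile.
risk = sigmoid(sum(wj * xj for wj, xj in zip(w, [9, 4, 0.3])) + b)
print(f"attrition risk: {risk:.2f}")
```

The governance point is that even this simple model makes consequential predictions about individuals from behavioural data, which is why boards should ask how such scores are validated, explained and used.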
Many of the same board responsibilities that the G20/OECD Principles of Corporate Governance assign to strategy and ethics also apply to AI adoption, culture and responsible use by HR.
To set the ethical tone for the company, boards should champion ethics, hold executives accountable for ethical behaviour, and see that guidelines for the responsible use of AI are developed and followed throughout the organization.
To act in good faith, with due diligence and care, boards should be fully informed about plans to apply AI in the company’s strategy, AI’s alignment with core values and ethical standards, the risks associated with the company’s AI strategy, and the regulations affecting the use of AI. Directors should have access to accurate, relevant and timely information.
To oversee corporate strategy, major plans of action, risk management, and budgets and business plans, boards should review and guide management’s vision, goals, actions and expenditures for AI; its support for innovation and new AI resources; its awareness of and plans for legal compliance and mitigating AI risk; and competitors’ use of and plans for AI.
To oversee corporate performance, expenditures and acquisitions, boards should review and guide the alignment of AI with strategy, shareholder value, ethics, and performance and risk indicators, as well as the implementation of AI plans. Also falling under board purview: oversight of the effectiveness of AI in accelerating processes and improving productivity, and of major investments in AI systems, talent and acquisitions.
To carry out these responsibilities, boards should also review and guide the following:
Set an ethical tone for the company.
- Creation and enforcement of guidelines for responsible and ethical AI use.
- Cultivation of a culture that embraces AI ethics and takes care to develop and use AI responsibly.
- Training in AI ethics and responsibilities.
Act in good faith, with due diligence and care.
- Strategies for establishing a culture that supports AI innovation in a responsible, ethical way.
- Employee attitudes towards AI.
- Strategies for earning employee trust in AI systems and achieving awareness and engagement in AI risk management.
- Strategies for improving productivity with AI through augmentation and automation.
- Regulations that affect use of AI by HR departments and managers.
- How AI is being used to acquire, develop, evaluate and manage talent; its effects on employee engagement; and best practices.
Oversee corporate strategy, major plans of action, risk management, and budgets and business plans.
- Management strategy to augment employee performance with AI.
- Management’s approach to using AI for talent acquisition and development.
- Management strategy to achieve a diverse and inclusive AI workforce.
- Major actions and expenditures for the use of AI in HR management, and progress towards successful implementation.
- Management’s plans and actions to encourage HR professionals to adopt AI solutions in their daily activities.
Oversee corporate performance, expenditures and acquisitions.
- Performance of AI used for HR management.
- Success in creating a culture that supports AI innovation, use and responsibility.
- Management compliance with data protection regulations (e.g. GDPR in the EU) and anti-discrimination laws (e.g. Civil Rights Act in the US).
In addition, board members are “expected to take due regard of, and deal fairly with, other stakeholder interests, including those of employees”. To deal fairly with employees’ interests, boards should also see that:
- AI systems used in recruiting, retaining and evaluating employees, and for other purposes by HR departments, are fair and unbiased.
- Employees have equal access to the benefits of AI.
- Employees’ personal data is protected and processed in accordance with the law (e.g. GDPR), kept secure and available only on a need-to-know basis.
- Systems used in HR management, and the decisions they make, are explainable and transparent.
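One concrete check a board can ask for when reviewing fairness in HR systems is an adverse-impact analysis of the system’s selection decisions. The sketch below applies the “four-fifths rule” from US EEOC guidance: a group whose selection rate falls below 80% of the highest group’s rate is flagged. The rule and the 0.8 threshold are standard; the group labels and candidate data here are hypothetical.

```python
# Adverse-impact ("four-fifths rule") check on a hiring algorithm's
# decisions. Group names and outcomes below are hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, hired_bool) -> {group: hire rate}."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag adverse impact for any group whose selection rate is
    below 80% of the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": round(r, 2),
                "ratio": round(r / best, 2),
                "adverse_impact": r / best < 0.8}
            for g, r in rates.items()}

# Hypothetical screening outcomes from an AI resume-ranking tool:
# group_a: 30 of 50 advanced (60%); group_b: 12 of 40 advanced (30%).
decisions = ([("group_a", True)] * 30 + [("group_a", False)] * 20 +
             [("group_b", True)] * 12 + [("group_b", False)] * 28)
print(four_fifths_check(decisions))
```

A ratio of 0.30/0.60 = 0.5 falls well below the 0.8 threshold, so the tool in this hypothetical would be flagged for review. Passing such a check is a screening heuristic, not proof of fairness, but it gives directors a simple, auditable question to put to management.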
The analysis in this section is based on general principles of corporate governance, including the G20/OECD Principles of Corporate Governance 2015. It does not constitute legal advice and is not intended to address the specific legal requirements of any jurisdiction or regulatory regime. Boards are encouraged to consult with their legal advisers in determining how best to apply the principles discussed in this module to their company.