Empowering AI Leadership: Brand Strategy


Introduction | Examples | Responsibilities | Oversight | Agenda | Resources | Endnotes

Other modules:

Home | Audit | Brand Strategy | Competitive Strategy | Customer Strategy | Cybersecurity | Ethics | Governance | Operations Strategy | People and Culture | Responsibility | Risk | Sustainable Development | Technology Strategy | Glossary


Nurturing trusted brands will be a vital requirement for firms wishing to use artificial intelligence (AI).

In the age of AI, the opportunity to strengthen a brand is huge. AI enables companies to offer services and products built on deep, personalized relationships. Services that seem to work as if “by magic” can create strongly loved brands – it is no accident that strong AI players such as Google are measured among the world’s most valuable brands. Moreover, a host of AI-powered tools are being created to enable marketing teams to better develop, support and monitor branding initiatives. Using AI as a force for social good, if part of a consistent pattern of supporting public well-being and ethical behaviour, will be one way in which firms deploying the technology can build goodwill.

But if AI offers an unparalleled opportunity to build an entirely new order of customer relationship, it also creates great risks that could destroy brands at terrifying speed. Should the AI malfunction and, for example, put human lives at risk or be seen to be used unethically, then the reaction may be severe. Issues that are not exclusive to AI – customer data privacy, data breaches and ethical management – will be thrown into even sharper relief by the technology. A growing host of media, regulators and campaign groups will be willing to step in and take public positions.

This means that the capacity of a firm to operate in the world of AI is going to be dependent on building, and sustaining, deep public trust. AI-powered propositions depend on often highly sensitive personal data, and stakeholders will be increasingly discerning about whom they share such data with and what the company is using the data for. Having a deeply trusted brand will become a clear differentiator – without sustainable trust it will be hard to persuade either customers or other partners in the value chain to want to share data with the firm.

The board, as the custodian of a company’s long-term ethical approach to business and the overseer of strategy, has a critical agenda-setting and oversight role in this area.

To balance the rewards and the risks, companies need to nurture their brands by:

  • Investing in and testing new tools to manage brands: AI is the force behind a new generation of tools to help companies monitor and grow their brands. Examples include employing image recognition to ensure appropriate logo use on a global basis or using natural language processing (NLP) to track social media conversations and sentiment about the brand in real time.
  • Having an absolute focus on building public trust: This requires that high ethical standards be set, communicated and monitored. Watching for changes in external expectations is crucial: The horizontal nature of AI technologies (e.g. NLP or image recognition) means that issues will quickly cross sectoral barriers within an industry. Behaving well, and being seen to behave well, will require willingness to respond to mistakes with a degree of transparency and humility. In certain crisis situations, board members may need to help set the approach.
  • AI for social good: There are many ways to use AI tools and resources to deliver better outcomes for the wider social good, including education, environment, health and hunger, equality and inclusion, and crisis response. Companies with strong AI capabilities can partner with NGOs and humanitarian organizations to lend their expertise in developing AI-based projects with a focus on social good.
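To make the NLP-based brand monitoring described above concrete, the sketch below scores social media posts against a small sentiment lexicon. Everything here – the lexicon, the sample posts and the scoring rule – is invented for illustration; a production system would use a trained sentiment model rather than word counting.

```python
# Minimal sketch of lexicon-based brand sentiment tracking.
# The lexicon and example posts are illustrative, not a real dataset.

POSITIVE = {"love", "great", "amazing", "helpful", "trust"}
NEGATIVE = {"broken", "awful", "creepy", "scam", "hate"}

def sentiment_score(post: str) -> int:
    """Return (#positive words - #negative words) for one post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def brand_sentiment(posts: list[str]) -> float:
    """Average sentiment across all posts mentioning the brand."""
    if not posts:
        return 0.0
    return sum(sentiment_score(p) for p in posts) / len(posts)

posts = [
    "love the new assistant, really helpful",
    "the chatbot is broken and a bit creepy",
    "great service, I trust this brand",
]
print(round(brand_sentiment(posts), 2))  # prints 0.67
```

A real-time dashboard built on this idea would stream posts in, score them continuously and alert the brand team when average sentiment drops below a threshold.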


US Open Tennis

The US Open tennis tournament used machine learning to analyse social media activity around brands associated with the 2017 event. This enabled organizers to determine the value generated by each sponsor, placement and network. The analysis found more than $3 million of total media value and more than 18 million engagements – with Mercedes-Benz and Emirates receiving the most value.[1]

A large technology company

A large technology company piloted a chatbot that aimed to experiment with learning through conversations. Targeted at 18–24-year-old Americans, the bot was designed to engage with people on social media platforms. However, the bot was fed discriminatory, socially inappropriate and offensive conversational data by online users, which it then learned to reproduce in its own responses. The chatbot was quickly taken offline and the experiment shut down – a swift response that probably limited longer-term brand damage.
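One concrete safeguard against the data-poisoning pattern in this example is to screen user messages before they enter a model's learning loop. The sketch below is a toy version with an invented blocklist; real systems use trained toxicity classifiers rather than word lists.

```python
# Toy safety screen for a chatbot's learning loop.
# BLOCKLIST is a placeholder; production systems use toxicity classifiers.

BLOCKLIST = {"slur1", "slur2", "offensiveword"}

def is_safe_for_training(message: str) -> bool:
    """Reject messages containing blocklisted terms."""
    return not any(term in message.lower() for term in BLOCKLIST)

def filter_training_data(messages: list[str]) -> list[str]:
    """Keep only messages that pass the safety screen."""
    return [m for m in messages if is_safe_for_training(m)]
```

The design point is architectural rather than lexical: no user-supplied text should reach the training set without passing a screening stage, and the screen itself needs ongoing review as abusive inputs evolve.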

A large retail technology company

It was recently revealed that a large retail technology company had to shelve an automated CV screening system when it became clear that applicants were being discriminated against due to historic gender biases within the training data. This posed a clear risk to the company’s brand and social standing. But by quickly removing the system from use, the company was able to avoid longer-term brand damage.
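The screening-bias scenario above can be checked with a simple audit. The sketch below computes per-group selection rates and their ratio (the "four-fifths rule" heuristic used in employment-discrimination analysis) over invented screening outcomes; a real audit would use actual decisions and richer statistical methods.

```python
# Illustrative disparate-impact audit for an automated screening system.
# The decisions below are invented; group labels are deliberately abstract.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate (share of candidates passed) per group."""
    totals: dict[str, int] = {}
    passed: dict[str, int] = {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + int(was_selected)
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of lowest to highest group selection rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6
ratio = disparate_impact(decisions)
print(f"impact ratio: {ratio:.2f}")  # a ratio below 0.8 signals possible adverse impact
```

Running such a check on held-out decisions before deployment – not after a press report – is the kind of control boards can ask management to evidence.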

Microsoft's AI for Earth programme

Microsoft's AI for Earth programme offers grants providing access for environmental sustainability projects to the company's AI tools and services. This is part of Microsoft’s AI for Good suite – a growing $115 million, five-year commitment to work to unlock solutions to some of society’s biggest challenges using AI. Investment in corporate social responsibility benefits brands through positive public associations and increases market penetration of AI tools and know-how.[2]


While the G20/OECD Principles of Corporate Governance do not specify technology in the list of responsibilities, boards cannot carry out their oversight duties without considering how their company’s use of AI, and management of its consequences, will affect the company brand.

Many responsibilities that apply to other modules pertain to brand and reputational management:

  • To act in good faith, with due diligence and care, boards should be fully informed about plans for applying AI in their strategy, AI’s alignment with core values and ethical standards, the risks associated with their AI strategy, and regulations affecting the use of AI. Directors should have access to accurate, relevant and timely information.
  • To oversee corporate strategy, major plans of actions, risk management and budgets and business plans, boards should review and guide management’s vision, goals, actions and expenditure on AI, their support for innovation and use of new AI resources, management’s awareness and plans for legal compliance and ameliorating AI risk, and competitors’ use of and plans for AI.
  • To oversee corporate performance, expenditures and acquisitions, boards should review and guide AI’s alignment with strategy, shareholder values, ethics, performance and risk indicators, implementation of AI plans, the effectiveness of AI to accelerate processes and improve productivity, major investments in AI systems and talent, and acquisitions.

See the Responsibility module for more details about the G20/OECD Principles and the above responsibilities.

To carry out these responsibilities, boards should also review and guide these brand-specific concerns:

Act in good faith, with due diligence and care.

As it relates to AI, board members should make good faith efforts to be fully informed about:

  • The importance of brand trust: without it, customers and business partners could be unwilling to share the often-sensitive information that AI systems rely upon to be effective.
  • The use of AI for social good and its potential impact on the company brand.
  • The risk of AI negatively impacting the company brand if AI delivers a poor customer experience or causes harm to person or property.
  • The risk of a public scandal if the company is accused of using AI unethically or irresponsibly.
  • The use of AI to help create, measure and monitor brand development.
  • News about companies suffering damage to their brand due to wayward AI systems.

Oversee corporate strategy, major plans of action, risk management and budgets and business plans.

AI brand management topics that boards should review and guide may include:

Management’s plans to:

  • Increase the brand’s reputation for trustworthiness.
  • Use AI as a tool to develop, measure and monitor brand development and evolution.
  • Use AI to enhance the customer experience and offer new or enhanced products or services.
  • Use AI for social good.

Management’s major actions and policies to:

  • Minimize the risk of poor customer experiences that could harm the brand. Risks to the customer experience include poor conversational agents, inaccurate AI predictions, flawed decisions made by AI systems and robots, and misguided human interactions with AI systems.
  • Minimize the risk of employees damaging the brand by disregarding ethical and responsible AI practices. This includes preventing biased or discriminatory AI decisions, avoiding safety issues, and managing significant job displacement through automation by ensuring that good AI practices such as transparency, explainability and accountability are followed.

Management should also test for new scenarios that might emerge from the deployment of AI – for example, how would the firm manage the response to a convincing AI-generated fake video of its CEO?

Oversee corporate performance, expenditures and acquisitions.

  • Follow consumer and customer perception of the brand, especially in relation to trust and likelihood-to-recommend metrics.
  • Ensure that ethical and responsible AI practices are being followed and any potential brand risks are identified.
  • Prepare a PR crisis response plan should the company be seen to be negatively using AI – and use the exercise to test whether the company is indeed operating in an appropriate fashion.
  • Check whether any acquisitions could potentially risk the company brand due to a poor AI-driven customer experience or a lack of responsible AI practices.
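The "likelihood-to-recommend" metric mentioned above is commonly operationalized as a Net Promoter Score (NPS): on a 0–10 survey scale, ratings of 9–10 count as promoters, 0–6 as detractors, and NPS is the percentage-point difference. A minimal sketch, using invented survey responses:

```python
# Minimal Net Promoter Score (NPS) calculation over 0-10 survey ratings.
# The sample ratings are invented for illustration.

def nps(ratings: list[int]) -> float:
    """NPS = %promoters (9-10) minus %detractors (0-6), in points (-100 to 100)."""
    if not ratings:
        raise ValueError("no ratings")
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 3]
print(round(nps(ratings), 1))  # 3 promoters, 2 detractors out of 7
```

Boards tracking this metric over time – rather than a single snapshot – can see whether AI-driven changes to the customer experience are building or eroding trust.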


This section includes three tools to help directors oversee management’s engagement with the brand’s AI aspects.

The knowledge assessment tool helps board members rate whether they possess, or have access to, the knowledge required to independently judge management’s knowledge and leadership of AI and brand strategy.

View Appendix 1 for the knowledge assessment tool here

The performance review tool consists of questions boards can ask management about their knowledge of AI and brand strategy and the progress and performance of their actions. It offers the SCEPTIC framework to help directors assess the answers they receive.

View Appendix 2 for the performance review tool here

The guidance tool offers possible suggestions for further action in an “If, then” format.

View Appendix 3 for the guidance tool here


The following suggestions can help the individual who prepares the board discussion and sets its agenda on brand strategy and artificial intelligence:

Before leading the first meeting:

  • Prepare yourself: Ensure you are familiar with what it means to develop a trusted brand and the potential opportunities and risks from AI to that trusted company brand. Identify the most urgent issues for the board to address. The Resources section lists further reading about potential brand development, risks and opportunities. Speak to executives, in particular the senior IT, data science and brand executives, about potential risks to the customer experience and about upcoming AI ethics questions that will need to be considered.
  • Gauge board member interest in AI and brand management: Speak to other board members. Learn what importance they place on AI and brand strategy and what concerns they have. Identify the board members who are most interested in rapidly moving forward on addressing AI and brand issues, and those who have concerns or who lack interest (and clarify whether this is based on other issues such as an educational gap).
  • Set goals: Think ahead about the desired outcomes from the board discussion.

Set the initial agenda item: building and protecting brand trust.

Agenda items may comprise:

  • Discuss: Review the perceived importance of trust for building and protecting brand value in the age of AI. Review the risks and opportunities to the company brand through the use of AI within the company. The risk conversation should not only draw upon internal aspects and issues but also consider important market examples (see the Resources section) for a fuller perspective. Opportunities include the use of AI for social good and the ability to better manage, measure and monitor the brand development.
  • Compare: Clarify how trust is being measured and the ways in which this might be strengthened or improved given the potential questions posed by the use of AI. Clarify what internal controls there are to manage risks that have been exposed, as well as considering what opportunities might positively affect the measurement.
  • Delegate: Check that the chief marketing officer, or similar front-line brand owner, has sufficient expert and educational support in place to be ready for AI-related issues that arise. However, ultimately ensuring that the necessary public trust is built and maintained will be a chief executive responsibility.
  • Engage: Decide how the board will participate in the brand monitoring process.

Set follow-up agenda items. These can include:

  • Access to information: Discuss with fellow board members what information they wish to have so that they can stay current with brand status and potential issues. You may, for example, want access to broader research on consumer views on data trust issues.
  • Employee safeguards: Discuss what steps to take to ensure the board is aware of practices or scenarios that might damage the brand.
  • Review ethics programmes: Regularly review the effectiveness of the ethics code and whether it needs to be updated. Being ethical, and being seen to be so, is a critical part of brand protection.
  • Awareness of ethics issues: Periodically review emerging and potentially impactful AI-related ethics issues at your organization, at partners on whom the company relies and across your industry. The reality of complex supply chains means that brand risks can emerge from different places.
  • Broaden board thinking: Invite experts on data and consumer trust issues to present to the board. These could include academics or leaders from the organizations that are devoting themselves to consumer data trust issues.
  • Avoiding “ethics washing”: Companies that use AI for altruistic purposes can be accused of using AI for good as a facade while other immoral activities continue. Discuss with fellow board members and outside experts why using AI for good requires ethical AI behaviour to build trust.


(All links as of 5/8/19)


Books

  • Katie King, Using Artificial Intelligence in Marketing, Kogan Page, 2019.
  • Paula Boddington, Towards a Code of Ethics for Artificial Intelligence, Springer, 2017.

Reports and articles


