While AI may present unique use cases and concerns, its cyber-risk and resilience implications arise within complex, highly interconnected ecosystems. The AI Oversight Toolkit provides guidance based on a general set of standards, including the G20/OECD Principles of Corporate Governance, 2015. To support board governance of cybersecurity risk and resilience, the World Economic Forum and its partners have developed a comprehensive framework of ten core principles intended to enable boards to fulfil their responsibilities in that context. To ensure responsible oversight of cyber-risk and resilience related to AI, boards can draw on this same framework to incorporate the cybersecurity implications of AI into their overall approach to cyber-risk and resilience.
Principle 1: Responsibility for cyber-resilience
- Based on the company’s current and anticipated future use of and interaction with AI, do the board’s existing structure and processes for reviewing cyber-risk and resilience provide sufficient oversight of AI?
- What skills and attributes do current and future board members need in order to understand cyber-risk and resilience with respect to AI?
- Do existing board members possess the requisite skills and experience to effectively oversee cyber-risk and resilience with respect to AI?
Principle 2: Command of the subject
- Does the board receive initial orientation and regular updates on cyber-resilience, including threats and trends related to the current and anticipated future use of AI within the organization?
- Does the board ensure that internal updates regarding the organization’s cyber-resilience, risk exposure and risk stance, as well as any independent assessments and benchmarking of the organization’s cyber-risk and resilience approach, incorporate the organization’s current and anticipated future use of AI?
Principle 3: Accountable officer
- Is the assigned corporate officer accountable for cyber-risk and resilience also authorized and accountable for reporting on cyber-risk and resilience related to AI?
- Does the corporate officer accountable for reporting on cyber-risk and resilience related to AI have sufficient visibility into all areas of the organization where AI may be in use?
- Are the organization’s decision-making authorities, processes and distribution of resources consistent with the structure and processes for responsibility over cyber-risk and resilience related to AI?
Principle 4: Integration of cyber-resilience
- Does the board ensure that management integrates cyber-resilience and cyber-risk assessment – as related to the organization’s acquisition, use of or interaction with AI – into an overall business strategy and business-wide risk management?
- Does the board review annually the organization’s strategic plan, ensuring that the cyber-risk and resilience implications of current or anticipated use of AI are appropriately incorporated and represented in the plan?
- Does the board consider the organization’s current and anticipated use of AI in its review of the organization’s cyber-resilience strategy, including risk-management options such as insurance?
Principle 5: Risk appetite
- Does the board have an understanding of, and visibility into, how the organization’s cyber-risk appetite is being applied in business decision-making related to AI?
- Is risk exposure related to the organization’s current and anticipated future use of AI included when the board is advised on current and future cyber-risk exposure, regulatory requirements and industry/societal benchmarks for risk appetite?
Principle 6: Risk assessment and reporting
- Does the board hold management accountable for providing balanced reporting on present and future organizational and ecosystem-wide cyber-risks – including those affected by the development of AI – as a standing agenda item during board meetings?
- Does the board receive updates from management regarding specific threats and trends associated with the use of AI by third parties?
- Does the board ensure that management’s action plans regarding the organization’s cybersecurity culture and awareness take into account current or future use of AI?
Principle 7: Resilience plans
- Does the board ensure that management’s cyber-resilience plans – including business continuity, communications, disaster recovery and incident response plans – take full account of the existing use of AI across the organization (including supporting the execution of the resilience plans themselves)?
- What is the organization’s policy regarding the board’s role in relation to cyber-resilience plans – including those related to the deployment of AI as part of those plans – and has this been clearly and explicitly communicated to the board and executive management?
- Does the board ensure that management has adopted an appropriate approach to cyber-resilience related to the organization’s use of and interaction with AI?
Principle 8: Community
- Does the board encourage management to collaborate with other stakeholders, as consistent with overall business strategy, related to current and potential enterprise-level use of AI as well as global developments in AI that may affect the broader environments in which the organization operates?
- Does the board receive updates from the corporate officer accountable for reporting on cyber-risk and resilience regarding potential opportunities for community collaboration to address cyber-risk and resilience related to AI, as well as the benefits and risks of such collaborations on a broad basis and with respect to specific stakeholders?
- Does the corporate officer accountable for reporting on cyber-risk and resilience ensure internal coordination by all relevant parts of the organization on the cyber-risks arising from AI?
Principle 9: Review
- Does the board ensure that there is an annual independent cyber-resilience review of the organization that includes the organization’s use of, and interaction with, AI capabilities, as overseen by the accountable corporate officer in collaboration with other relevant corporate officers?
- Does the board ensure there is a process in place to evaluate third-party cyber-risk and resilience as related to AI?
Principle 10: Effectiveness
- Does the board periodically review its own performance in the implementation of these principles?
- In light of the fast-developing potential uses of AI, does the board need to seek independent advice for continuous improvement of its own performance with respect to cyber-risk and resilience oversight of AI?