Effective oversight enables responsible use of technology

Like technology innovations before it, AI is transforming our business and personal lives in many ways. What sets AI apart is the staggering speed of its adoption, which challenges boards to balance the risks, opportunities and ethical concerns of an emerging technology that has historically been loosely governed.

In a recent roundtable moderated by Grant Thornton National Managing Partner for Geography Nichole Jordan, a panel of senior executives and board members had a practical discussion on how boards can promote ethical use of AI throughout the organizations they serve.

How can the board maintain its “noses in, fingers out” role while encouraging responsible corporate and board use of AI?

“First, create the space in every management conversation to ask how AI is connected to decision-making,” said Grant Thornton Chief Strategy Officer Chris Smith. “Second, introduce a systematic approach and structure on the board so these conversations become a discipline. And third (but not least), implement secure AI technologies for board-level performance improvement to model the behaviors you’d like management to adopt.

“Because AI is a significant disruption, it is appropriate for boards to go a little deeper than usual. When there’s a massive disruption that can have such a major impact, it’s still ‘trust but verify,’ but verify a little deeper.” 

The landscape is littered with examples of negative brand impact related to AI, highlighting the importance of achieving alignment with management on AI ethics. Problems can arise when an AI project is evaluated against a single priority and ends up creating unforeseen risks to other priorities.

“If a company plans to implement AI to improve customer service, but has also made a public commitment to reduce energy consumption, it’s important to address these potentially conflicting ethical priorities,” Smith said. “The energy required to power AI could create tension points for certain companies based on the brand philosophy or the ‘E’ in their ESG mission.” 

 

Set a positive tone

The wide potential scope and disruptive impact of AI can be unsettling for some people, but boards can foster a sense of confidence by setting a positive tone.

“It’s very natural to be worried about new technologies, especially foundational ones that could be very transformative,” Grant Thornton Advisory Services Partner Ethan Rojhani said. “Popular media can overemphasize the fear factor. But in our experience, AI’s impact on employee morale and culture has been tremendously positive.”

The board can help calm fears by showing its commitment to the ethical use of AI. It is not imperative to have AI-specific expertise on the board to accomplish this, but board members should be educated and trained appropriately on the capabilities and ethical issues related to the technology. “There is a huge difference between being an AI practitioner and being able to oversee AI governance,” said Deborah Dunie, CEO of DBD Insights and member of numerous boards. “You do not need to be an expert in the technology to make the types of ethical and other decisions that we're making with respect to meeting the strategic plan.”

Adopt an AI framework 

Using a well-constructed governance framework to guide oversight of AI can help boards take a thorough approach, consider all the relevant aspects of the technology, and prevent potentially harmful gaps in governance.

“Intuitively, we think that we launch a new system, and we will be more efficient and more effective, but we need to evaluate what is really happening,” Dunie said. “It’s really important to use a good framework…. It shows you are looking at gating factors that are meaningful, and tracking results like increased performance or reputation.”


7 important questions for management

Asking management the right questions on AI ethics can set the organization on a course to productive use of the technology without negative consequences. For each proposed AI initiative, board members should consider asking:

  • How have you addressed the ethical considerations of this AI initiative?
  • Have you considered the potential second- and third-order effects of the initiative?
  • How are you protecting employee, client and customer data from a privacy perspective?
  • What are the processes for auditing the AI algorithm to ensure it is operating as designed or to switch it off in case it isn’t? (Note that such auditing is possible only for in-house-designed AI and open-source large language models, and not for large language models such as ChatGPT, whose algorithm is closed and not accessible.)
  • Does the initiative comply with laws and regulations around AI ethics and fairness, including intellectual property rights, both foreign and domestic?
  • How are you preparing for regulation on the horizon?
  • How will you keep the board informed on the progress of the initiative?

A steadying influence

The board’s charge to provide oversight amid the transformational power of AI can be challenging, but boards have helped guide organizations through many significant changes over the years. From the Y2K crisis to the advent of digital business to the pandemic, boards have provided a steadying influence that has kept businesses thriving.

As the future unfolds, unexpected events and outcomes will require swift, appropriate action. Boards that have reached consensus with management on the ethical use of AI will be best positioned to provide valuable guidance.