The Promise and Peril of AI: How Can Boards Respond to Emerging Technologies?

Sam Altman, CEO of OpenAI, said he was “caught off guard” by his termination. He was all the more surprised when he was rehired just days later. The whole episode was shrouded in controversy: the board initially cited concerns that Altman had not been consistently candid in his communications with it, and it later became apparent that the board itself had not communicated clearly with Altman or its stakeholders.

Altman’s experience says something about the way governing boards approach their CEOs, but it reveals something even more important: many governing boards don’t fully appreciate the implications of AI technology and its impact on corporate America. Here are some lessons that executive leaders should draw from the OpenAI incident.

The Need for Effective Governance Strategies

The rapid deployment of AI technologies underscores the need for companies to adopt governance strategies that keep pace with emerging technologies. That means governing the day-to-day use of these technologies (whatever that looks like in your corporate setting) as well as navigating crises and areas of stakeholder concern.

An effective governance strategy should seek to accomplish the following:

  • Communicate clearly with directors, management, and key stakeholders
  • Lead through crisis and change
  • Identify potential risks and plan for future contingencies
  • Highlight opportunities for improvement and/or profit

Sophisticated contingency planning helps boards think through a wide range of scenarios and develop responses to challenges as well as opportunities. Making this a regular strategic focus also keeps boards familiar with the evolving capabilities of AI-based applications and the risks associated with them.

These exercises will, in turn, help you devise a communications strategy that gives key stakeholders a clear picture of your posture toward AI technology.

The Need to Train AI Applications

AI-based technologies have countless applications, but they require adaptation and “training” to be effective within your company or industry. Companies will need to invest time tuning and evaluating these tools before they reach their full potential.

This process requires careful consideration. For example, what internal datasets are you willing to devote to training an AI-based application? 

On the one hand, the more data you can feed your AI technology, the more effective it will be. On the other, providing internal data to an AI system can expose you to privacy concerns, cybersecurity dangers, and threats to your intellectual property. Corporate teams will therefore need to decide how best to handle these datasets so they can maximize the power of their AI tools without taking on undue risk.
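
To make that trade-off concrete, here is a minimal sketch of how a data team might pseudonymize an internal dataset before it ever reaches a model-training pipeline. The column names, the salted-hash approach, and the sample records are illustrative assumptions, not a complete privacy program.

```python
import hashlib

import pandas as pd

# Hypothetical direct identifiers in an internal customer dataset.
SENSITIVE_COLUMNS = ["customer_name", "email", "ssn"]


def pseudonymize(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Replace direct identifiers with salted hashes so records can still be
    joined consistently for training without exposing raw personal data."""
    scrubbed = df.copy()
    for col in SENSITIVE_COLUMNS:
        if col in scrubbed.columns:
            scrubbed[col] = scrubbed[col].astype(str).map(
                lambda value: hashlib.sha256((salt + value).encode()).hexdigest()[:16]
            )
    return scrubbed


# Usage: scrub the dataset before handing it to any training pipeline.
raw = pd.DataFrame({
    "customer_name": ["A. Buyer", "B. Client"],
    "email": ["a@example.com", "b@example.com"],
    "purchase_total": [120.50, 310.00],
})
print(pseudonymize(raw, salt="rotate-this-secret"))
```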

The same applies to external data. Here, the question is less about privacy or security and more about ensuring that your AI programs receive accurate information to improve their decision-making capabilities. As machine learning algorithms become more advanced, boards will need to oversee how that technology is trained and validated for accuracy and consistency.

The Need for Data Privacy and Security

Every board should be thinking about ways to improve data privacy and security in the face of AI. This starts with the data you use to “train” the application, but it also extends to the access that others may have through the application itself.

Federated learning trains an AI model across separate devices or data silos so that the raw data never leaves its source; only model updates are shared with a central coordinator. Similarly, homomorphic encryption allows AI programs to compute directly on an encrypted dataset, so no one handling the computation can “read” the underlying data.
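
For readers who want to see the federated idea in miniature, the sketch below has two hypothetical data silos train a simple linear model locally and share only their model weights, which a coordinator averages. It is a toy illustration under those assumptions; production deployments rely on dedicated federated-learning frameworks rather than hand-rolled code like this.

```python
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """One silo improves the shared weights using only its own local data."""
    w = weights.copy()
    for _ in range(epochs):
        gradient = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * gradient
    return w


def federated_average(silos, rounds=20, n_features=3):
    """A coordinator averages each silo's updated weights every round;
    the raw data never leaves the silo that owns it."""
    global_w = np.zeros(n_features)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in silos]
        global_w = np.mean(updates, axis=0)
    return global_w


# Synthetic data standing in for two departments' private datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
silos = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    silos.append((X, y))

print(federated_average(silos))  # converges toward true_w without pooling the data
```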

Of course, no single measure will ever be foolproof. Boards will therefore need to remain agile and adapt to new technologies and risk protocols to protect against both current and future risks.

Furthermore, boards should establish policies regarding generative AI applications. How do such applications align with your commitments to security and to intellectual property rights? Devising a policy for these programs will help protect both your data and your legal rights.

The Need to Clarify the Purpose of AI

Before you roll out a major AI transition, it’s important for your governing board to be on the same page about the purpose of these new technologies. Today, AI-powered tools can be used for anything from creating marketing content to automating core business processes to providing automated customer service.

How does your company intend to use AI? Answering this question will not only bring your team into alignment but also give stakeholders and other observers a clear sense of how you’re adapting to AI.

More specifically, boards should work to connect AI to specific business goals. For instance, will AI-powered chatbots lead to higher customer satisfaction scores? Will automated AP/AR software shorten your payment cycles? Goals like these give you concrete benchmarks for evaluating the ROI of your AI rollout.
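
As a back-of-the-envelope illustration, the sketch below ties a chatbot rollout to one such goal using entirely made-up figures for ticket volume, cost per ticket, deflection rate, and licensing cost. The numbers are assumptions chosen to show the shape of the calculation, not benchmarks.

```python
def simple_roi(annual_benefit: float, annual_cost: float) -> float:
    """Return ROI as a fraction of the annual cost."""
    return (annual_benefit - annual_cost) / annual_cost


tickets_per_year = 120_000      # assumed annual support-ticket volume
cost_per_ticket = 6.50          # assumed fully loaded cost of a human-handled ticket
deflection_rate = 0.30          # assumed share of tickets the chatbot resolves
chatbot_annual_cost = 150_000   # assumed licensing plus integration spend

benefit = tickets_per_year * deflection_rate * cost_per_ticket
print(f"Estimated ROI: {simple_roi(benefit, chatbot_annual_cost):.0%}")
# Prints roughly 56% under these hypothetical assumptions.
```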

The Need for Further Education on AI

It’s unrealistic to expect your board to stay current on every new development in AI. But it’s vital to invest time in learning as much as you can about the pros and cons of these emerging technologies.

At least once a year, spend time reviewing recent developments in AI and machine learning. Where does your company stand compared to others in your industry? The better your board understands how AI is changing your industry landscape, the better equipped you’ll be to make well-informed decisions about how to proceed.

The Promise and Peril of AI

Every governing board should think of AI as both a challenge and an opportunity. As more companies integrate AI-powered tools into their business models, adapting quickly will be essential to maintaining a competitive edge.

At every step in the process, boards should exercise careful oversight to ensure that AI solutions are aligned with the company’s vision. Navigating these risks will allow you to keep adapting to future changes and give you the technological infrastructure you need to thrive in today’s business landscape.