By now, most companies have accepted an uncomfortable truth: if they delay adopting emerging technologies for too long, they risk extinction. Of course, every advancement that promises to increase productivity, build resilience, improve customer experience, and so on, is fraught with peril. (Think big data, machine learning, and cloud computing.) But the history of death by disruption proves that arguably the greatest existential risk ever is not acting quickly enough.
“As a board, as a management team, we need to recognize that one of the biggest risks in AI is not trying it,” Donna Wells, a director at Walker & Dunlap and Mytech Systems, told directors gathered at a CBM roundtable on digital transformation and AI, held in collaboration with Google. “If you look at the pace of business model change with new technologies in the past, there are signs that AI will be just as fast or even faster. We really need to focus on the risks of doing nothing.”
Simply put, wait-and-see is not an option. It's also unrealistic to be cautious about a technology that's so readily available to everyone. AI is very different from cloud computing and other recent advancements in terms of its accessibility, points out David Homovich, a solutions consultant in Google Cloud's CISO office. “With cloud, there was some hesitation from people wanting to know what their competitors were doing, especially in industries like financial services, healthcare, and life sciences,” he says. “With AI, in just about every sector, everyone is rushing into it, or they may not even realize they're already using a lot of AI in their companies.”
“Boards and executives would be wise to take an inventory to understand where the company is currently experimenting with AI,” Wells agreed. “Everyone I know who has been through that process has been amazed at the scope, depth and breadth of experimentation that is actually happening within their organizations.”
“What's interesting, and really new, is that generative AI has really democratized the technology,” Ludwig agreed. “Whether we like it or not, generative AI is being used everywhere in the company. The question is, how do we guide the use of AI and get the most benefit and the least risk for the company?”
Rampant fraudulent experiments also increase many other risks closely linked to AI, from loss of intellectual property to poor decision-making due to faulty data input. As an example of AI going big wrong, Helmut Ludwig, a board member at Hitachi and Humanetics, pointed to the Samsung incident, where information shared with a large language model by an R&D employee was leaked to the public.
Confidentiality Management
Several directors agreed that how to best defend against attempts to harness the power of AI to cause the loss of valuable intellectual property is a frequent topic of discussion in boardrooms today. “The insurance industry has been using things like machine learning and deep learning for decades to identify fraud, set pricing and other purposes, so we are extremely familiar with the applications and potential damages,” said Gene Connell, director at Erie Insurance. “But large language models (LLMs) are something else, and they are raising concerns throughout organizations and in boardrooms.”
For example, using AI to process loan applications could have unintended consequences, such as diversity issues in the approval process. And retailers using vendors' AI-based consumer service chatbots could run into issues with inappropriate responses being generated, says Glenn Marino, a managing director at Upbound Group. “Will this chatbot embarrass me? Or worse, will it create some kind of bias issue?”
While acknowledging that most companies will likely access AI capabilities through a platform provider, several directors expressed concerns about mitigating data risks when entering into contracts. “If a company is entering into a contract with a platform provider, [provider]”Is that data protected?” asked Nigel Travis, a director at Abercrombie & Fitch.
“Just like any other technology solution within your organization, you need to be able to control your data,” said Alicja Cade, director of financial services in Google's CISO office, urging companies to put guardrails around their employees' use of consumer AI. “The security of consumer AI is very different than the security of the enterprise version. It's important to make sure you're using something that's designed entirely from an enterprise perspective to lock down your users and meet the rules.”
Cade described Google's approach as an AI partner company as a “layered cake” architecture that builds security into every level of the company's AI platform. “First, there's the Google Cloud infrastructure, which is secure by default by design and during deployment, and is the foundation on which everything runs,” she explained. “Then there's the Vertex AI platform, which gives us the templates for developing a model garden. How you design that garden, the IP and data in that garden are yours. We don't use any data that you input into the model, and access to the data and IP is limited to your organization.”
Protecting data and intellectual property starts with non-disclosure agreements included in enterprise AI platform contracts. “At Google, our data non-disclosure agreements say we can't use customer data for any purpose other than what we've stated in advance,” says Homovich, who advises board members to question their company's contracts. “We can't use customer data to train models, we can't use the prompts we use in our large language models. It starts with defining what data you're using and how you're using it. This will help define the scope of controls you need, including security, privacy, compliance, risk management, and resiliency.” [purposes].”
Downstream Data Risk
Directors also need to help management ensure that appropriate protections are in place not only with platform providers and within their own organizations, but also by suppliers involved in the process. “We're finding that in many companies, supplier and vendor risk is one of the last risks that isn't fully understood within the enterprise,” notes Deidre Evens, director at Regency Centers. “We think about risk management and assessment for Google and Microsoft, but this is an area that's often forgotten. We need to be thinking about other vendors who are using AI themselves.”
It's a reality of our increasingly connected world that every time data flows downstream, the risks associated with data sharing flow back upstream. “When cybersecurity was in its infancy, we thought about protecting our own home first. It took us a while to look at our connection points to suppliers, vendors and partners,” Wells notes. “Hackers took full advantage of the entry points into the safe home environment we thought we had built. So we need to learn from that experience and think about upstream and downstream risk areas in the evolution of AI, even earlier than cyber.”
After all, a security gap anywhere in the supply chain will leave a company facing compliance issues. “From a regulatory perspective, you’re not only responsible for your own actions, but also the actions of your suppliers,” Cade points out. “So how do you get visibility into that in terms of protecting your data and responsible AI? Are you working with suppliers who aren’t thinking about things that could potentially cause harm, even if they’re not intending to?” Red teaming—a structured testing effort that generates malicious prompts to uncover flaws or vulnerabilities in the AI, testing the system’s ability to generate harmful output or leak information—can help assess this risk, she added.
Directors described the challenge of protecting data privacy and meeting regulatory requirements in different markets while also enabling the pursuit of innovation as daunting. “You have to think about the impact on your stakeholders: employees, customers, shareholders, the environment,” explained Sarah Mathews, a director at Freddie Mac, State Street, Carnival Cruise Lines, and Dropbox. “We tried to isolate the data because we didn't want to violate GDPR, but then we gave the company freedom to work with just one or two customers. It's been really encouraging to see what people have come up with.”
In the early days of AI, it's also important to actively participate in shaping the regulatory environment. “Just like with cyber, it's really important for organizations to get involved with industry associations and talk directly with regulators to make sure that regulation is risk-based, not prescriptive or control-focused,” Cade says. “So engagement is key.”
Human Factors
Accountability policies are another way companies can mitigate risk and minimize the likelihood of incorrect outcomes. For example, to address concerns that flaws in AI-generated information could lead to inaccurate predictions and poor decision-making, companies can make it clear that the responsibility lies with the individual employees who use AI as a tool, not with the AI itself.
“At one of the companies where I work, experienced team members simply believe what comes out of the LLM without having deep expertise,” says Ludwig, who sees a lack of awareness of the hallucination problem associated with generative AI as a major risk. “Our experienced colleagues understand the need for validation, and that makes all the difference in successfully applying LLM.”
“Traditionally, the user community for technology was IT,” agrees Evans. “Now it's everyone, so having good policies around responsible use and notification of use goes a long way in managing and governing that. But as a board, we also need to make sure that employees are properly trained and know what questions to ask management about people operating in an environment where AI is part of the company's framework.”
Just like with cybersecurity, Cade says employee education and training on the risks and dangers of AI is essential. “Your users are always going to be human, and you need to reduce the chances they fall victim to risk, danger and fraud. This isn't just a risk for the CISO, it's a risk for everyone in the organization. So if I were an executive, I would say to every executive across any major business unit or function: What is your risk exposure, not just in cyber but in AI? Do you actually know what your exposure is? And what efforts are you making to mitigate that risk, including user awareness? Don't treat it as a bonus. It's part of the business.”
It's equally important that all board members understand AI capabilities, best practices, and industry standards like ISO 42001 and the NIST AI Risk Management Framework. Gone are the days when one board member was appointed as an expert on technologies like AI, Ludwig says. “You need to build capacity on the board side,” he says. “At the end of the day, most boards realize that this is a very important board decision and they need to make sure that the entire board is well-informed on the subject, because this is really foundational for our company.”
“The ideal approach for boards is to ensure that everyone on the board has some understanding of not only the role AI plays in their business and strategy, but also the risks,” advises Cade. “Recognize that cyber AI is everyone's business, and think and act seriously from that perspective.”