As artificial intelligence becomes more deeply embedded in operations, strategy and infrastructure, directors are being asked to provide oversight of systems that did not exist when most board structures were built.
Sheila Bangalore, global director at CBMGames and a director of Paloma Health and Stoneage Waterblast Tools, and Samantha Kappagoda, a director of Credit Suisse Funds and chief data scientist at Nugherati Partners, shared their strategies for adapting to this moment of AI oversight. Some practices worth considering:
Be intentional about oversight structure. AI oversight does not always fit neatly into existing committees. Some boards have assigned AI to audit or risk, while others have created standalone oversight channels. What matters most is intentionality and fit.
“In one of our boardrooms, we were quite intentional about designing a risk and compliance committee that is distinct from audit because of our aspirations,” Bangalore said. “We designed a complete risk framework that is unique to our business. It addresses not only quantitative but also qualitative components, because reputational risk is a serious issue.”
Avoid silos. AI oversight should not be siloed, as it touches multiple risk domains, including technology, operations, reputation and talent. Cross-committee membership and coordinated reporting cycles help create alignment and reduce gaps.
“The chair of the audit committee sits on the risk and compliance committee, and the chair of the risk and compliance committee participates in audit. That ensures fluid communication between the two committees.”
Anchor oversight in the business. Oversight should be tailored to the company's size, regulatory exposure and strategic goals. For some companies, AI is core to the business model; for others, it is a tool for efficiency. Risk tolerance, data sensitivity and sector maturity are all factors. “It's very context-specific,” Kappagoda said. “There is no one size fits all here.”
Probe AI talent and compensation strategies. The board does not hire technical staff, but it is responsible for ensuring that resource allocation aligns with strategic goals. That includes understanding how technical talent is sourced, managed and retained.
“The board isn't going to get granular into every detail, but what we need to understand comes through the questions we ask,” Bangalore said. “What is the talent management strategy around the AI teams we employ? Why are we prepared to spend that kind of money?”
Kappagoda agreed. “The board can ask questions about key metrics and KPIs for this.”
Understand the tools yourself. As generative AI use grows, boards and individual board members need to understand how these tools work and how they are being used within the organization, including by directors and executives themselves. Usage policies must address IP, confidentiality, data handling and training risks.
Kappagoda described friends who put their own information into ChatGPT without really thinking about the potential risks. “I asked: Is it the version on your phone? Is it the enterprise version? Are there guardrails, or is it the free version? Because that is important non-public information you're putting in.”
Ask the biggest questions. Bangalore said the most important questions a committee can ask management about AI oversight are the most basic ones: Why are we using it? “Is it a lever to compete? Is it disruptive? What are you using it for?” Bangalore said. She also emphasized tabletop exercises and war games.
“There is so much opportunity,” Kappagoda said. “But every company has to take small bites. Not every company is a big tech company. Everyone else can test, run sandboxes and do proofs of concept. You have to consider the risks. But this space is moving very quickly.”