In less than three years, generative AI has grown from a curiosity into a central topic of board discussions. And if that pace seems fast, just wait another three years.
But if the opportunities are huge, so is the risk of getting it wrong, a concern that grows more acute as so many companies charge headlong into development and deployment. The key to winning with AI, as everyone reading this knows, is striking that balance: creating the right governance structures now so your company can move forward with confidence.
To help, Corporate Director assembled a distinguished panel of technology and governance experts at its third annual Digital Trust Summit in Washington, DC, to offer practical insights for navigating the complex AI landscape. The panelists: Dominique Shelton Leipzig, CEO of Global Data Innovation and founder of the Digital Trust Summit; Siobhan McFeeney, former chief technology officer of Cole Corporation; Will Lee, CEO of Adweek; Bhawna Singh, CTO of Okta; Dr. Christopher DiCarlo, philosopher, ethicist, and author of Building a God: The Ethics of Artificial Intelligence; and Christine Heckart, founder and CEO of Xapa and board member of SiTime and Contentful.
“Over the next 15 to 18 months, the number of experimental AI use cases will explode,” said Shelton Leipzig, who has worked with boards around the world to create governance structures that mitigate AI risks. “The window of opportunity to ensure that our values, mission, and purpose as a company are embedded in AI is rapidly closing. The technology will not wait for us. We must take control proactively and make this a priority.”
“We're at a halfway point in history between the way things have always been done and the way things are going to be done,” DiCarlo said. “This is such a unique thing that we're living through right now, and to echo Dominique's words, we better get it right. We might only have one shot to get this right, so let's do it now.”
Still, Singh said this is a hopeful moment, simply because it hasn't passed yet. “Right now, we're all moving into a benefits-and-outcomes stage. Our leaders are listening, they're looking for information. They're looking for data. They're looking for guidance. We're in that stage, so it's a good time to really get the word out, because they're absolutely ready to listen now that they're trying to get things into production.”
legal risk
As AI systems become more integrated into business operations, the panel highlighted the growing exposure of companies that fail to establish adequate governance. Although the regulatory landscape is still evolving, other mechanisms are already creating accountability, including reputational damage and legal liability.

“I've talked to CEOs of large agencies, small agencies, and they all have a lot of really good ideas about how to optimize their business with AI,” Lee said. “I think 99 out of 100 of them have no understanding of governance and putting guardrails around it. They're like, 'Oh, yeah, that's the CTO's job. That's the CIO's job,' right?”
Companies developing AI systems can be held accountable for their impacts much like the makers of any other product on the market, in a way not seen with the rise of social media, which benefited from legal protections dating to the early days of the commercial internet. Those protections do not extend to AI. “Like anything else, if you're developing a product for consumer use, you're likely also responsible for any negative implications or impacts it may have,” Singh said.
Shelton Leipzig pointed to a lawsuit against an AI company alleging that a child's attachment to an AI chatbot contributed to his suicide.
Such examples show that ultimately the responsibility lies with the company's leadership, not the technology itself. Shelton Leipzig described a retailer whose vendor's AI was misidentifying paying customers as criminals. In the resulting government investigation, lawsuits, shareholder derivative actions, and eventual bankruptcy proceedings, the CIO, the CTO, and the vendor were nowhere to be found. “The reality is that when an AI incident occurs, it is not the CTO or CIO who is required to testify before Congress or answer questions.”
Singh said she expects AI to rapidly gain attention from CEOs and boards of directors as it enters a more mature, customer-facing era. “When you land in a place where you're making money on something, all the other aspects like governance and compliance land right there, because my leadership looks at it, security looks at it, your regular cadence looks at it.”
keeping pace
Panelists agreed that this requires a new approach to corporate oversight. Like cybersecurity, AI is a complex and rapidly changing technology challenge that can quickly damage a company's reputation if something goes wrong. But unlike cyber, AI can permeate and impact many parts of a business simultaneously. Rather than treating AI as a compliance issue, boards should think holistically about how it will affect the business, from productivity to culture to brand to business model.
The challenges in doing so are especially acute for traditional businesses that aren't regulars on Sand Hill Road, McFeeney said. “Most industries are not technology companies,” she says. “They're not Okta or Salesforce or Google. They're a retailer, they're a manufacturing company. And they're not at the same pace. And they have a very thoughtful board of directors who aren't even tinkering with the AI themselves, who are trying to learn.”
All agreed that the learning curve can be steep, given an industry-wide lack of hands-on AI training for directors and senior leaders. Directors, especially those without a deep technical background in artificial intelligence, may want to take the time to experiment with non-critical public information using off-the-shelf products like ChatGPT or Claude, just to see the functionality.

“What boards lack is the right questions to ask and the ability to make some sense out of the answers when they get them,” Heckart said. “For most of the board members, it's too complicated to make sense of. The average public board is not made up of a bunch of engineers. It's made up of a bunch of industry people who have no idea what's going on technically and therefore have no idea of the risks involved or how the foot bone connects to the ear bone. Because that's what's going on here. There are so many connections that are invisible, misunderstood, or not understood.”
McFeeney took on the challenge of familiarizing the board and senior executives with AI using simple “games,” such as asking board members to take a photo of the contents of their refrigerator and asking ChatGPT what they could make for dinner from it. “Make it accessible so it's not scary, so people can touch it and use it, and all of a sudden there's a conversation around it.”
“Frankly, we've certainly found that a lot of people feel embarrassed. They don't really understand it,” McFeeney said. “We paired up with seniors and taught them how to use it [in ways] that were important to them. For our CFO, we built one very simple chatbot that educates them on everything they need to meet with investors and allows them to do it themselves.”
Lee said AI fluency is also a critical element for leaders who want to be successful in the coming years, even if many of their colleagues don't find the time. His advice is to make time to experiment, to avoid the pitfall of ending up alongside someone far more knowledgeable about AI than you are. “It creates a lot of tension, it creates suspicion. It creates a lot of unintended consequences.”
At Adweek, Lee made AI a central topic of conversation during monthly lunches with people across the organization, briefly asking them how they were experimenting with AI. He has also been transparent about his own use of the technology, working to remove the stigma many creative people attach to publicly admitting they use generative AI as a tool. Hiding the use of AI, he said, only increases the risks.
“We're a business publication, so we have a fair amount of analysis and data, and we do the number crunching. Let's be transparent, and don't misrepresent. I told my staff, 'When you come up with an idea, or use it as a copywriting partner, or use it for code, be transparent about it. Say, “We did this through Claude,” so it just becomes part of our workflow and we don't have to apologize for it.'”
building a culture
Building such a culture is essential now if we want to deliver on the promise of what AI will become. Because, as Heckart reminded everyone, for all its promise and power, generative AI is still just a machine and a long way from anything like sentience. The key to getting the most out of AI is, and will continue to be, supporting the humans working with it.

This is forcing organizations to rethink leadership development at all levels, especially as powerful AI becomes part of daily operations in nearly every industry. “Just concentrating leadership at the top is an outdated model,” she says. “If you're graduating college right now and you're lucky enough to get hired somewhere, there aren't that many entry-level jobs, so from the day you walk in the door you're a manager. You're a manager of an AI agent.”
Her biggest advice for leaders right now is to “pay attention to your people. Businesses are going to need humans for a long time to come, and humans can be made better or worse with AI,” she said. “You can scale mediocrity, you can use AI to improve it, but it all starts with human quality, emotional intelligence, leadership skills, critical thinking skills.”
As DiCarlo puts it, “We're kind of in the dot-com moment, right? We're at a stage where we all know it's big, but we don't quite understand how it's going to be integrated and implemented.” His bet is that AI will fundamentally change society on a scale that few in the business world can fathom.
“This is going to shape and change the way society functions,” he says. “It's hard for me to say calmly how big this is, how big it's going to be, and how dramatically this is going to impact everyone's lives. So it's great that we're sitting down and thinking about this now, because that's exactly what we need to do.”