At a pivotal time for new forms of artificial intelligence, and for the internet itself, an overwhelming majority of CEOs seem to be calling for a reevaluation of how these technologies are managed—but who will do it and how remains an open question.
That was a big takeaway from an online forum of chief executives hosted Tuesday by Jeffrey Sonnenfeld of the Yale School of Management, an event marking the 50th anniversary of the birth of the Internet.
In three surprising poll results, the 200 CEOs of major companies in the audience expressed striking unity when it came to re-evaluating the legal protections that underpin the success of “Big Tech” platforms such as Google, Facebook and TikTok.
Nearly 85% of those surveyed said they agreed or strongly agreed with the statement, “I support increased government regulation of social media platforms.” And 100% of respondents said they support the Kids Online Safety Act, introduced by Sens. Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.).
Perhaps most significantly, when asked about the protections tech companies enjoy under Section 230 of the landmark 1996 Communications Decency Act — a provision that grants social media platforms a “safe harbor” from liability for content posted by users — some 96% of CEOs said they believe the law is now “outdated” and in need of reconsideration by Congress.
The event, a biannual gathering of major corporate CEOs, authors, investors, academics, lawmakers and technology pioneers, focused much of its attention on the question of how best to regulate, or not regulate, the digital world as it enters a new phase.
But who will reassess how governments police cyberspace, how, and to what extent remain very much open questions, as they have been for much of the past 50 years, and they increasingly hinge on data ownership and protection, as well as the rapid growth of generative artificial intelligence.

Anne Neuberger, deputy national security adviser for cyber and emerging technology, reminded the group that, at least when it comes to cybersecurity, the private sector needs to work more with the federal government to combat adversaries in an era of growing threats, if only because so much of America's critical infrastructure is in private sector hands. She gave the example of a partnership between Google and Microsoft to provide free cybersecurity training to 1,800 hospitals in rural America. Such efforts will need to be expanded in the age of artificial intelligence.
“Before AI is used in our critical water, pipelines and rail lines, we need to ensure that safeguards are built in, such as transparency of the data used to train it, proper red teaming of the models, human involvement in key decisions, making sure that testing is done and guardrails are built in before operational systems are connected to AI models,” she said. “So cybersecurity is very demanding in some ways. And social media is demanding when we think about AI regulation and the need for us as a country to have responsible regulation in place to ensure we can benefit from the massive innovation that AI brings, rather than waiting until later to put in place controls that are costly and difficult.”
Tom Bossert, a former homeland security adviser to President Donald Trump, largely agreed, but pointed out that new and proposed cybersecurity regulations do little to stop “coordinated nation-state hacks” of U.S. companies. “I'm already seeing compliance costs increasing,” he said, “and I'm not convinced that's going to lead to better security outcomes.”
The most skeptical voice in the morning session came from longtime Silicon Valley investor Roger McNamee, who has spent decades growing increasingly concerned about how the tech industry regulates itself, dating back to his days as one of Facebook's earliest investors. Over the past year, he has argued that the unregulated gold rush around generative AI, unsupported by the underlying economics, is not only a terrible long-term bet for investors, but could pose dangers to society far greater than the unforeseen challenges created by the rise of social media.
“My strong advice to all CEOs on this call is to pause,” he said. “There is no need to rush into artificial intelligence. In fact, if you do the analysis, you may conclude that this technology is not yet ready for broad adoption and that applying it to improve enterprise productivity may be counterproductive, just as we have seen with other Internet technologies. So I urge you to recognize that not only is this battle far from over, we've barely entered it.”