Business growth is seen as a top priority for directors in 2025 (more than three-quarters identified it as a primary objective in our research), and AI adoption is increasingly non-negotiable in the corporate world. But how will these two forces combine to help AI-driven digital transformation scale faster and more economically?
While many directors believe that AI has the potential to optimize operational and cost efficiency, factors such as increased employee productivity and access to better data are also seen as important opportunities.
The latter two have risen three and four places, respectively, compared with our 2024 report, and boards consider them key use cases for artificial intelligence, alongside areas such as product innovation and enhanced customer support capabilities.
The majority of boards already use AI in at least some capacity, but one in five have taken no action at all on AI or generative AI, which underlines that many companies are still approaching this area with caution. Core anxieties, such as a lack of internal AI knowledge and data privacy concerns, may be seen as outweighing the opportunities in the eyes of some board members.
Boards are under pressure to adopt AI
The fear of falling behind competitors, combined with rising customer expectations, may leave boards and leadership teams feeling driven down an accelerated path of AI adoption. But of course, there is an important “promise and peril” dilemma for businesses to navigate, especially given the public debate on the potential dangers of generative AI.
Global harmonization of evolving AI regulations would be nirvana for legal and compliance teams, but approaches differ, shaped by a variety of geopolitical priorities. Governments are still deciding on the best way to govern AI, trying to strike a balance between an innovation-centric agenda that embraces the technology’s potential for remarkable social and economic change and the need to govern AI systems in a way that addresses concerns about legal, ethical, and social harm.
One end of the scale places a much stronger emphasis on protecting citizens’ rights and ensuring the ethical use of AI technology. The leading example is the EU AI Act, which aims to create a comprehensive regulatory framework for AI governance and provide a high level of protection for health, safety, and fundamental human rights. It focuses on ensuring that AI is a “force for good” and on promoting human-centric, trustworthy AI.
The other end of the scale favors the innovation narrative, in which regulation is often cast as an inhibitor of progress. Not unlike companies and their boardrooms, governments are under pressure to keep pace with their geopolitical competitors. So far, the regulatory focus in many other countries, including the US and the UK, has tended to move away from comprehensive frameworks and enforceable laws like the EU AI Act, favoring instead sector-specific approaches built on guidelines and standards for a wide range of industries.
“AI definitely needs strong governance,” says Dale Waterman, Global Solution Designer Lead. “The issue of competing values is not new to governments or the technology sector. For years, we have worked to find the right balance between the competing interests of privacy and national security. Protecting timeless societal values and ensuring the ethical use of AI, while creating an environment for AI innovation, is undoubtedly one of the defining issues of our lifetime.”
A lack of “AI literacy” is considered the biggest risk for businesses
While directors feel there are important AI-driven opportunities to be utilised in areas such as cost efficiency, data processing, employee productivity, and engagement, there are still some obvious risk factors, particularly with regard to the use of generative AI.
Among these concerns is a perceived gap between the technology’s capabilities and board members’ knowledge and ability to make informed decisions about it. Almost a third of directors cite AI procurement and implementation as a significant risk. The intersection of numerous conflicting regulations and data privacy, the tendency of generative AI tools to “hallucinate” content, a lack of expertise on strategic issues, and IP infringement are all considered significant potential risks.