To understand how artificial intelligence is dramatically reshaping the cybersecurity risk landscape, consider the type of fraud most of us know best: the email that appears to come from family, friends, or colleagues but is in fact a ruse to defraud a company or gain access to its systems. Timothy Howard, a partner at Freshfields, said such scams have grown steadily more sophisticated over the years, and that AI is now enabling scammers to create far more convincing communications that are less likely to raise suspicion at all. "It's reaching a new level," he told directors gathered at a recent Corporate Director Forum on artificial intelligence, held in partnership with Freshfields.
Increasing cyber risk
"Imagine how effective a scam email targeting an individual can be when it leverages AI's ability to quickly digest someone's social media profile or biography to create the perfect message to get them to click a link or download an attachment," Howard said. "And if one of your employees is compromised and attackers have access to their entire inbox, AI tools can ingest that inbox, analyze the person's communication history and messaging style, and run a highly effective internal spear-phishing campaign."

And that's just the beginning. The integration of AI into cyberattacks has significantly enhanced the capabilities of bad actors across the cybersecurity landscape. "AI is creating additional cyber risks as a force multiplier, increasing the ability of threat actors to penetrate systems, maintain access, exploit information networks, and evade various defenses," Howard said.
The use of AI in malware is particularly worrying. These AI-powered threats can adapt to evade detection and maintain persistence within compromised systems. "We have also seen reports of malware using AI capabilities to recognize when it has been detected and make slight changes to maintain persistence and avoid being removed from the system," Howard said.
Brock Dahl, a partner at Freshfields and former deputy general counsel for operations at the National Security Agency, said AI has also lowered the barriers to entry for cyber attackers, making attacks easier to carry out. "It used to be that developing the sophisticated malware used to infect systems or cause ransomware incidents required a special skill set, the ability to write that kind of code," he said. "Some of those basic skills can now be performed by machines; you can actually ask a machine to carry out some of those functions. So a wider range of people can participate in this type of activity. And if you look at the statistics around malware events, ransomware events, and these types of threats, many of them are increasingly enabled by these tools. The numbers are quite staggering."
Innovation with AI
From a governance perspective, the emergence of AI capabilities requires a proactive approach to assessing their impact on cybersecurity vulnerabilities, Dahl said. He urged directors to establish a robust governance framework that addresses both the challenges and the opportunities of AI. "It's not just that there are threat actors using these capabilities," he said. "Your own company may be using AI tools, products, and services, or creating products and services in new and different ways. And because of the nature of these technologies, many of the risks are unpredictable. How do you think about managing and overseeing risk in areas that are hard to see or even recognize?"

For board members and executives, understanding the transformative role of AI in cybersecurity is becoming a key requirement for protecting digital assets and ensuring enterprise resilience. Boards at companies at the cutting edge of AI development, and at those using AI tools in new and diverse ways, will want to ensure that security and data privacy measures receive appropriate consideration in the race to be a leader in AI adoption. When a company layers functionality onto a model provided by a third party, the board needs to understand how the underlying model works and what that partnership means for control over the data provided or shared.
Dahl shared three aspects of governance and risk management that boards can adopt as companies consider AI; a brief code sketch following the list illustrates the three checks in concrete terms.
Visibility – Understand the nature and content of the data being collected and used. Given the enormous volumes of data and the security risks involved, boards need to look at the quality of the data, the processes it passes through, and how that information is ultimately used. This includes privacy law considerations and the need for a deep understanding of datasets and how they are used.
Testing and replication – Understand what internal processes or external providers are doing with your data. "It is important to replicate and test these processes to ensure reliability and break potential risk cycles caused by problematic data and feedback loops," Dahl explained.
Audit results – Check that outputs match expected results or at least appear reasonable. "A lot of the challenge here is that it's hard to see what's going on inside these AI features, because there's so much going on," Dahl said. "There are already examples of problematic outcomes related to security, and of [flawed] results related to data generation. But by having visibility into the data and its use, and being able to independently measure and verify what results can be expected from a particular product or service, you can mitigate these types of risks."
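To make the three checks concrete, here is a minimal sketch in Python. It is purely illustrative: the score function is a toy stand-in for an AI system's output, and check_visibility, check_replication, and audit_outputs are hypothetical helpers mapping to Dahl's three aspects. None of this code comes from Freshfields or the forum.

    # Illustrative sketch of the three governance checks discussed above.
    # All names are hypothetical stand-ins, not a real review framework.

    def score(record: dict) -> float:
        """Toy stand-in for an AI system's output: fraction of required fields present."""
        required = ("id", "source", "value")
        return sum(k in record for k in required) / len(required)

    def check_visibility(dataset: list[dict]) -> list[str]:
        """Aspect 1 (visibility): know what data you hold and where it came from."""
        issues = []
        for i, rec in enumerate(dataset):
            if "source" not in rec:
                issues.append(f"record {i}: unknown provenance")
            if "value" not in rec:
                issues.append(f"record {i}: missing payload")
        return issues

    def check_replication(dataset: list[dict]) -> bool:
        """Aspect 2 (testing and replication): re-run the process and confirm results match."""
        return [score(r) for r in dataset] == [score(r) for r in dataset]

    def audit_outputs(dataset: list[dict], low: float = 0.5, high: float = 1.0) -> list[int]:
        """Aspect 3 (audit results): flag outputs outside the expected range."""
        return [i for i, r in enumerate(dataset) if not (low <= score(r) <= high)]

    if __name__ == "__main__":
        data = [
            {"id": 1, "source": "crm", "value": 10},
            {"id": 2, "value": 5},  # missing provenance: should be flagged
        ]
        print("visibility issues:", check_visibility(data))
        print("replicates:", check_replication(data))
        print("out-of-range outputs:", audit_outputs(data))

In practice, each check would run against the company's actual data pipelines and model outputs, but the structure is the point: provenance review, independent replication, and auditing of results against expectations.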