Every decade or so, a transformative technology sweeps through the business world, threatening incumbents and creating great new opportunities for the businesses that understand it first. In the 1990s, it was the Internet. Today, it is artificial intelligence.
But, as in the high-tech bubble, AI will produce big winners and big losers. No one expects corporate directors to fully understand how the technology works or how it can be applied to the businesses they are responsible for overseeing. But they can ask the right questions. And the most important question at this point is: What is the return on investment?
“One of the big challenges for directors is that they're talking about AI being transformative for the industry, but ROI hasn't been proven in some industries yet,” says Katherine Forrest, a partner at Paul Weiss, who focuses on technology law.
Software code, for example, will soon be written almost entirely by AI tools rather than human coders. And while the retail industry has adopted AI widely throughout the ordering process, as Forrest notes, it does not yet make the clothes. Similarly, the legal profession is experimenting with AI to conduct research and draft briefs, but persistent problems with "hallucinations" stand in the way of AI agents delivering exactly what clients want.
Predict risk
Before approving a large investment in AI technology, directors should ask three basic questions: What is the ROI? How are our competitors using the technology? And what is the timeline for results? The answers will differ by industry, but the important thing is that directors get them from the managers who will actually deploy the technology. Only then should directors approve spending shareholder dollars on an AI project.
Like the Internet, AI promises to dramatically improve efficiency. But it also exposes businesses to major risks, some of which are nearly impossible to predict. Few companies outside the technology sector can afford to invest in the software and server farms that actually run AI programs, which means moving company data to the cloud, where it can be stolen or infected with malware by hostile actors.
AI-powered tools can also open up new potential liabilities, such as claims that automated hiring and credit-approval systems are biased against women and minorities. And it is hard even to imagine the lawsuits that could erupt after an accident in a fully AI-driven industrial process. Should the machine be held responsible, or the person who set it in motion by pressing a button?
What to ask
The common law in the U.S. has a remarkable track record of adapting liability rules to new technologies, and it will eventually produce predictable rules for AI. But directors cannot wait for the courts to catch up. They have a duty to anticipate risks and ask tough questions of corporate counsel, chief technology officers and other managers: Do we have the cybersecurity resources we need? Have the risks been analyzed? Have any already materialized, and how were they handled?
Avoiding AI is not the answer either. Forrest likens a company that has not yet adopted the technology to a castle with high walls and a moat around it. Outside, a vibrant new community is growing, and those inside cannot afford to ignore it.
“You need to look around the outside of the castle and decide whether you want to lower the drawbridge or not,” Forrest says. “But it needs to be a careful, cyber-controlled drawbridge.”