Artificial intelligence (AI) is not a future consideration for corporate boards; it is here and advancing rapidly. Companies across industries are actively experimenting with and deploying AI systems, from customer-facing applications such as support and sales automation to back-office functions such as forecasting, fraud detection, and automation of routine, rules-based processes. The pace and breadth of adoption matter: according to CAQ research, 90% of S&P 500 companies mentioned AI in their 2024 10-K filings, up from 25% in 2023.
Because the benefits and risks of AI span the entire enterprise, AI oversight and management have become top of mind for boards of directors, especially audit committees. As AI becomes more integrated into the processes and systems underlying financial reporting and internal controls, audit committees need a working understanding of how these technologies operate, where they introduce new vulnerabilities, and what questions to ask of management and auditors. Effective oversight in today's environment means being proactive about AI rather than waiting for it to surface as a problem.
I recently facilitated a webinar with practitioners, executives, and board leaders who have first-hand experience with AI integration and oversight. Panelists included Stacy Mills, Global Controller and Chief Accounting Officer at Marsh; Tom Petro, author of AI in the Boardroom and a board chairman; and Mike Leonardson, Partner at EY.
Here are highlights from the discussion for audit committees seeking guidance on AI:
The Risk of Doing Nothing
The panel's most pointed message was that the biggest risk is not moving too fast, but moving too slowly. AI will fundamentally reprice cognitive work within organizations, producing a significant shift in operating leverage. If a company is at a scale where AI could drive financial reporting efficiencies but it is not being considered, audit committee members should challenge management as to why.
The rigor of traditional financial reporting must still be maintained, however. One risk is that AI can hallucinate, producing plausible but incorrect output that erodes trust in AI-driven results. Committees must ensure that AI-generated work receives the same level of oversight as work performed by people: AI can handle data-intensive tasks, but human judgment is the final backstop.
“The key is that accountability remains with humans. AI can automate tasks and uncover insights, but executives remain accountable for decisions, approvals, and outcomes. AI supports the process, not owns it.” — Mike Leonardson, EY Partner
What does good AI integration look like?
Mills shared that at Marsh, AI is already deeply integrated across several financial reporting processes. For example, custom journal entry applications were built to automate routing, approval, and processing.
“Once this went live, we saved about 200 FTEs and saved the company about $10 million. So the benefits are real and achievable.” — Stacy Mills, Global Controller and Chief Accounting Officer, Marsh
The company also uses financial analysis tools that surface variances and anomalies across its 600 legal entities at the end of each reporting period. This significantly reduces the time finance teams spend manually searching for exceptions, freeing them for more strategic work while maintaining audit quality.
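The kind of variance surfacing described above can be sketched in a few lines. This is an illustrative example with hypothetical entity names, balances, and a hypothetical 10% threshold, not Marsh's actual tooling:

```python
MATERIALITY_PCT = 0.10  # hypothetical 10% period-over-period variance threshold

def flag_variances(prior, current, threshold=MATERIALITY_PCT):
    """Return entities whose balance moved more than `threshold` vs. the prior period."""
    exceptions = []
    for entity, prior_balance in prior.items():
        curr_balance = current.get(entity, 0.0)
        if prior_balance == 0:
            continue  # avoid divide-by-zero; new entities would be handled separately
        pct_change = abs(curr_balance - prior_balance) / abs(prior_balance)
        if pct_change > threshold:
            exceptions.append((entity, round(pct_change, 3)))
    # largest variances first, so reviewers see the biggest exceptions at the top
    return sorted(exceptions, key=lambda x: -x[1])

prior = {"Entity A": 1_000_000, "Entity B": 250_000, "Entity C": 500_000}
current = {"Entity A": 1_050_000, "Entity B": 400_000, "Entity C": 495_000}
print(flag_variances(prior, current))  # Entity B moved 60% -> flagged for review
```

The point for committees is not the code itself but the control design: the thresholds and the routing of flagged items to human reviewers are policy decisions that management should be able to explain.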
5 Questions Every Audit Committee Should Ask
Introducing AI models into your enterprise requires evaluating how the technology will impact current and future operations.
Petro distilled AI system monitoring into five topics and shared guiding questions to define each.
- Temperature: Is this use case deterministic (i.e., no uncertainty built in) or stochastic (i.e., uncertainty built into the model)? And what does that mean for risk?
- Training discipline: Are the decisions built into the model consistent with accounting policies before it touches the actual ledger?
- Autonomous boundaries: When exactly can a system operate without human review, and what are the guardrails?
- Exception architecture: What gets flagged for human review? If the system never flags anything, that itself is a concern.
- Validation loop: How can you be sure it's working as intended, not just at startup, but over time?
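The first question, temperature, has a concrete technical meaning. As a hedged toy sketch (not any specific vendor's model): at temperature zero a model's output is effectively deterministic, always selecting the most likely option, while higher temperatures make output stochastic, varying from run to run. The logits and temperatures below are made up for illustration:

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick an option index: argmax at temperature 0, weighted-random otherwise."""
    if temperature == 0:
        # deterministic: always the highest-scoring option -> repeatable, auditable
        return max(range(len(logits)), key=lambda i: logits[i])
    # stochastic: softmax with temperature, then a weighted random draw
    weights = [math.exp(score / temperature) for score in logits]
    total = sum(weights)
    return rng.choices(range(len(logits)), weights=[w / total for w in weights], k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]  # hypothetical model scores for three options
# temperature 0: the same answer every time
assert all(sample(logits, 0, rng) == 0 for _ in range(5))
# temperature 1: answers vary run to run, so the same input can yield different output
draws = {sample(logits, 1.0, rng) for _ in range(50)}
print(draws)  # more than one distinct outcome
```

For a committee, the risk implication is the one the panel raised: a stochastic configuration can give different answers to the same question, so controls must account for that variability.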
Answering these questions before launching an AI-driven system is key to identifying potential risks and planning responses that ensure the responsible use of AI by stakeholders.
Notes on third-party models
As more vendors incorporate AI into their financial reporting tools, audit committees face a governance gap: SOC 1 and SOC 2 reports address controls, not the accuracy or behavior of the AI model itself.
As AI systems learn and evolve, there is a risk that the models no longer keep pace with the business, or that the business changes in ways the models cannot track. In either case, the output may no longer reflect reality.
“If our model is continuously learning, every time it updates and starts doing something different based on what it has learned, whether we realize it or not, that effectively becomes an accounting policy change for us.” — Tom Petro, author of AI in the Boardroom and board chairman
Continuous monitoring is required to ensure that AI technology continues to operate as designed. For a vendor's AI, the committee needs to understand how often the model changes and what controls, if any, govern its evolution.
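One simple form such a monitoring control could take is a drift check: compare the model's current behavior against the behavior approved at validation, and escalate when it moves beyond a tolerance. A minimal sketch, with a hypothetical baseline exception rate and tolerance:

```python
BASELINE_EXCEPTION_RATE = 0.04   # hypothetical rate approved when the model was validated
TOLERANCE = 0.02                 # hypothetical drift band; outside it, escalate to humans

def drift_check(flagged, total, baseline=BASELINE_EXCEPTION_RATE, tol=TOLERANCE):
    """Return (current exception rate, whether human review is needed)."""
    rate = flagged / total
    return rate, abs(rate - baseline) > tol

# This period the model flagged 45 of 500 transactions: 9%, well above the 4% baseline
rate, needs_review = drift_check(flagged=45, total=500)
print(rate, needs_review)  # 0.09 True -> model behavior has drifted; escalate
```

In practice the monitored metric might be exception rates, output distributions, or reconciliation differences, but the governance question is the same one Petro raises: who sees the alert, and who decides whether the model's new behavior is acceptable.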
Conclusion
Audit committees do not need deep technical expertise to provide effective oversight. By asking the right questions of management, aligning AI initiatives with business objectives, and holding AI output to the same standard as human judgment, committees can ensure financial reporting keeps pace as the technology advances.
Want to learn more about AI in financial reporting? Below are some resources you can explore.
Subscribe here to stay up to date on the latest webinars in CAQ's Audit Committee Effectiveness Series.
