The impact of artificial intelligence (AI) has dominated business headlines in recent years, from streamlining corporate operations to bringing new products to the masses. Big technology companies are investing billions of dollars in AI data centers to support the growing demand for AI, and that demand shows no signs of slowing down.
Publicly traded companies across industries are also investing heavily in this technology, embedding AI into their daily operations, business processes, and customer experience. But for these investments to be successful, the element of trust is essential.
Stakeholders, from investors and regulators to employees and consumers, want to know that AI-enabled systems and outputs are reliable, transparent, secure, and used responsibly. This is where the auditor comes into play.
In the CAQ publication The Role of the Auditor in AI: Now and in the Future, we explore how companies are using AI, the challenges driving a lack of trust in the technology, and why auditors are uniquely positioned to increase trust and confidence in AI. Below are my key takeaways on how auditors' proven approaches and evolving skill sets support this ever-changing landscape.
How public companies are currently leveraging AI
Companies across industries are incorporating AI in a variety of ways to increase the efficiency and effectiveness of their operations and improve the employee and customer experience. In the CAQ's Spring 2025 Audit Partner Pulse Survey, audit partners cited these top five areas of AI use:
- Process automation (59%)
- Customer experience, service, and support (48%)
- Predictive analytics (28%)
- Targeted marketing (26%)
- Cybersecurity (22%)
The reliance on AI in these areas varies. Critical processes such as financial reporting still require human oversight to ensure accuracy and reliability. In other processes, AI can operate with limited human involvement, automating certain activities and potentially freeing up employees for higher-value work. Trust is a key factor supporting reliance on AI.
AI transparency and disclosure
As AI adoption expands, stakeholders are demanding greater transparency. Companies are beginning to disclose information about their AI strategies, risks, and governance in both regulatory filings and voluntary reports.
Many companies are beginning to include AI-related information in their Form 10-K filings. An analysis of Form 10-K filings with the SEC in 2024 found that 72 percent of S&P 500 companies discussed AI, often highlighting its risks in Item 1A. Risk Factors or detailing investments and strategy in Item 1. Business. These disclosures show that AI poses both risks and opportunities.
In addition to regulatory filings, many companies publish AI principles and governance frameworks on their websites. These often emphasize values such as accountability, transparency, trust, and privacy. Some of the leading companies are releasing independent reports on their approaches to responsible AI. Voluntary adoption of frameworks such as the NIST AI Risk Management Framework is also common. These actions demonstrate that companies are approaching AI rigorously and responsibly.
As AI frameworks and regulations continue to evolve, the CAQ will continue to monitor developments and share resources as they become available.
Challenges in building trust in AI
AI has the potential to transform business, but it also brings new challenges that can undermine stakeholder trust if not managed effectively.
- Explainability and interpretability: AI can become a “black box,” making it difficult for users to understand how or why a system produces a certain output. Stakeholders may be interested in how companies evaluate the suitability of output from AI tools.
- Reliability and accuracy: AI systems, especially generative AI, can hallucinate and produce false but convincing output. If stakeholders expect consistency, errors can shake trust.
- Data privacy and cybersecurity: AI tools can inadvertently expose sensitive data or create new vulnerabilities to cyberattacks. Research shows that investors are very concerned about the privacy and security risks associated with the use of AI.
- Responsible and ethical use: Stakeholders expect companies to use AI fairly, ethically, and in accordance with regulations. Issues such as bias, discrimination and lack of transparency can create reputational and regulatory risks.
Companies are at different stages of addressing these issues, but one theme is clear: stakeholders want more confidence that companies are using AI responsibly and effectively managing the risks associated with its use.
Expanding role of auditors in AI
An EY report found that respondents around the world have low confidence that companies will manage AI with their best interests in mind. Independent assurance provided by trusted gatekeepers, public company auditors, can help close this trust gap and provide confidence that companies are managing AI responsibly. Public company auditors bring independence, rigorous professional standards, and deep expertise in evaluating systems and controls, qualities that set them apart from other assurance providers.
AI assurance services directly address an enterprise's use of AI, such as the design and implementation of AI governance policies and procedures, controls designed and implemented to address associated risks in critical business processes and supporting technologies, or compliance with AI-related regulations. These services can increase stakeholder trust in AI by providing insight into how companies are managing the use of AI.
Firms such as KPMG and PwC have already announced AI assurance services to address businesses' need for greater confidence in the use of AI. This is the audit profession in action: auditors leveraging their unique skill sets to increase confidence in new forms of company-reported information. While AI assurance is in its infancy and continues to evolve, stakeholders can look to public company auditors to increase confidence and transparency in the use of AI.

The CAQ will continue to monitor the impact of AI on the public company auditing profession and the U.S. capital markets. For more information, see our publication The Role of the Auditor in AI: Now and in the Future and our other AI resources.
