Modern finance teams are exploring how they can use AI to drive efficiencies and increase finance’s value to the organization.
AI’s predictive capabilities are already revolutionizing how finance approaches risk management: analyzing historical data, detecting market trends, and incorporating real-time information into forecasts.
Glenn Hopper of Eventus Advisory Group has spoken extensively about use cases for financial AI, including GenAI, and has thoughts on where finance is headed.
In this interview with Katie Kuehner-Hebert, Hopper, who was CFO of Eventus until he became head of AI R&D two months ago, details how CFOs have embraced AI and how they can expand its use across finance functions to support strategic planning and decision-making.
How can CFOs expand their use of AI?
One of the main promises of AI is to increase automation of repetitive and time-consuming tasks like data entry, bank reconciliations, accounts payable/receivable workflows, etc. Automation reduces the risk of human error and saves valuable time.
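To make the automation point concrete, here is a minimal sketch of an automated bank reconciliation: matching bank-feed transactions to ledger entries on amount, with a tolerance for posting lag, and routing anything unmatched to a human reviewer. The sample data, field names, and `reconcile` helper are illustrative assumptions, not any particular ERP’s interface.

```python
from datetime import date

# Hypothetical sample data: a real pipeline would pull these from a
# bank feed and the general ledger.
bank_txns = [
    {"id": "B1", "date": date(2024, 3, 1), "amount": 1200.00},
    {"id": "B2", "date": date(2024, 3, 2), "amount": -350.50},
    {"id": "B3", "date": date(2024, 3, 5), "amount": 980.00},
]
ledger_entries = [
    {"id": "L1", "date": date(2024, 3, 1), "amount": 1200.00},
    {"id": "L2", "date": date(2024, 3, 3), "amount": -350.50},
]

def reconcile(bank, ledger, date_tolerance_days=3):
    """Match bank transactions to ledger entries on exact amount,
    allowing a few days of posting lag; return matches and exceptions."""
    unmatched_ledger = list(ledger)
    matches, exceptions = [], []
    for txn in bank:
        hit = next(
            (e for e in unmatched_ledger
             if e["amount"] == txn["amount"]
             and abs((e["date"] - txn["date"]).days) <= date_tolerance_days),
            None,
        )
        if hit:
            matches.append((txn["id"], hit["id"]))
            unmatched_ledger.remove(hit)  # each entry matches at most once
        else:
            exceptions.append(txn["id"])  # routed to a human reviewer

    return matches, exceptions

matches, exceptions = reconcile(bank_txns, ledger_entries)
# B1 and B2 match despite B2 posting a day late; B3 becomes an exception.
```

Only the exception queue needs human attention, which is where the time savings and error reduction come from.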
In the near future, AI has the potential to change how financial executives approach regulatory compliance. AI-powered systems can automatically analyze and interpret legal documents, flag potential violations, and generate compliance reports to help adhere to complex and constantly evolving regulations.
How will large language models (LLMs) change the financial industry?
Traditionally, analyzing financial statements has been time-consuming and labor-intensive, requiring manual review and interpretation of long, detailed documents. One of the inherent strengths of LLMs is their ability to automatically extract relevant information from financial statements, including KPIs, risk factors and management commentary.
LLMs’ natural language processing capabilities can quickly identify trends, anomalies, and insights that a human analyst might miss. Similarly, in risk assessment and management, LLMs can analyze various unstructured data sources, such as news feeds and regulatory filings, to identify risks and red flags. By monitoring unstructured data sources in real time, LLMs can flag emerging exogenous risks, such as market volatility, geopolitical events, and reputational threats, enabling CFOs to make timely adjustments.
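The extraction workflow described above can be sketched as a thin wrapper around any LLM completion endpoint: build a structured prompt, send the filing excerpt, and parse a JSON reply. The prompt wording, the `extract_financial_facts` helper, and the stubbed `fake_complete` response are illustrative assumptions, not a specific vendor’s API.

```python
import json

# Hypothetical extraction prompt; a production prompt would be tuned
# and validated against real filings.
EXTRACTION_PROMPT = """You are a financial analyst. From the filing excerpt
below, extract: (1) key performance indicators with values, (2) stated risk
factors, (3) a one-sentence summary of management commentary.
Respond as JSON with keys "kpis", "risk_factors", "commentary".

Excerpt:
{excerpt}
"""

def extract_financial_facts(excerpt, complete):
    """`complete` is any callable that sends a prompt to an LLM and
    returns its text response (e.g., a wrapper around a vendor API)."""
    raw = complete(EXTRACTION_PROMPT.format(excerpt=excerpt))
    return json.loads(raw)

# Stubbed model call so the sketch runs without an API key; a real
# response would be generated by the model from the excerpt.
def fake_complete(prompt):
    return json.dumps({
        "kpis": {"revenue": "$4.2B", "operating margin": "18%"},
        "risk_factors": ["FX exposure", "supplier concentration"],
        "commentary": "Management expects margin pressure to ease in H2.",
    })

facts = extract_financial_facts("...filing text...", fake_complete)
```

Keeping the model call behind a plain callable makes the pipeline testable and vendor-agnostic, which matters given how quickly these APIs change.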
Can it be applied to analysis and planning?
Scenario analysis and stress testing are areas where LLMs can assist, helping to assess the potential impact on financial performance and test resilience. LLMs also open the door to new forms of financial analysis and research. By processing and consolidating large amounts of unstructured data, such as earnings call transcripts, analyst reports and market research, LLMs can generate valuable insights and recommendations to help inform investment decisions.
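The stress-testing idea can be illustrated with a simple Monte Carlo simulation of a downside revenue shock. This is a deliberately simplified sketch: the cost structure, shock distribution, and `stress_test_cash` helper are assumptions chosen for clarity, not a real risk model.

```python
import random

def stress_test_cash(base_revenue, fixed_costs, variable_cost_ratio,
                     revenue_shock_pct, n_trials=10_000, seed=42):
    """Monte Carlo stress test: draw a downside-only revenue shock of up
    to revenue_shock_pct and report the share of trials in which
    operating cash flow goes negative."""
    rng = random.Random(seed)  # seeded for reproducibility
    shortfalls = 0
    for _ in range(n_trials):
        shock = rng.uniform(-revenue_shock_pct, 0.0)
        revenue = base_revenue * (1 + shock)
        cash_flow = revenue * (1 - variable_cost_ratio) - fixed_costs
        if cash_flow < 0:
            shortfalls += 1
    return shortfalls / n_trials

# With these assumed figures, break-even revenue is $7.5M, so only
# shocks deeper than -25% cause a shortfall.
prob = stress_test_cash(base_revenue=10_000_000, fixed_costs=3_000_000,
                        variable_cost_ratio=0.6, revenue_shock_pct=0.30)
```

An LLM’s role in such an exercise would be upstream of the math: translating qualitative scenarios from news and filings into shock parameters, and narrating the results for decision-makers.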
What are the challenges of implementing LLMs?
Successful implementation requires careful consideration of data quality, model interpretability, and ethical implications. Finance professionals should work closely with data scientists and technology experts to realize the full potential of LLMs.
Additionally, significant investments in data infrastructure, talent acquisition, and change management are fundamental to successful implementation. Understanding the ethical implications of AI and ensuring algorithms are transparent, unbiased, and aligned with company values and societal expectations are essential. This needs to be part of any AI implementation, not an afterthought.
From a practical standpoint, how do organizations go about integrating AI?
Like any new platform, AI should be integrated into workflows in a phased, “agile” manner: an iterative process that prioritizes high-impact, low-risk use cases and ensures organizational buy-in.
Finance departments need to identify areas where AI can provide the most value: automating repetitive tasks, enhancing risk assessment models, improving investment analysis, etc. Starting with a targeted pilot can help validate the benefits of AI and build a compelling case for broader adoption.
AI integration also requires a culture, skills, and process change: embracing experimentation and encouraging collaboration with cross-functional teams that include data scientists and IT professionals. Building a strong data governance framework, ensuring data quality, and addressing ethical considerations are key elements of successful integration.