
The AI adoption curve in finance has hit an unexpected flat spot. After the rapid climb from 37% adoption in 2023 to 58% in 2024, the numbers for 2025 tell a different story: only 59% of finance organisations now use AI. Essentially, no growth.
That isn’t a rounding error. Momentum has stalled, and several shifts between 2024 and 2025 help explain why.
The early wave of AI adoption rode on accessible use cases and vendor hype. Organisations automated invoice processing, deployed chatbots for employee inquiries, and sped up document analysis and classification. These projects shared common traits: all were relatively simple to implement, had clearly defined ROI, and required limited enterprise-wide data integration.
By mid-2024, most organisations pursuing AI had already tackled those applications. Next in line were more complex problems, requiring deeper technical capability, cross-functional coordination, and infrastructure that most finance departments didn’t have.
The technology itself evolved in ways that raised the bar. Generative AI capabilities that emerged in 2023 created new possibilities—and new complexity. Finance leaders suddenly faced questions about large language models, prompt engineering, and the risks of hallucination. The learning curve steepened just as regulatory scrutiny of the field increased.
Vendor consolidation played a role, too. Early AI tools often worked as standalone point solutions. By 2025, leading platforms had integrated AI across multiple modules, pushing organisations to move from isolated experiments to commitments to broader ecosystem strategies.
Survey data from analysts tracking enterprise technology adoption confirms this picture and points to several specific barriers that intensified by 2025.
#1 AI initiatives have grown more complex.
What started as straightforward automation projects now involves multiple systems, cross-functional teams, and complex technical architectures. Most finance departments weren’t built to handle that level of complexity.
Setting up an AI solution now requires coordination among finance, IT, operations, and often external partners. Each additional stakeholder multiplies implementation time and political complexity.
#2 Data quality remains a persistent problem.
The same issue that plagued early AI efforts remains. And it has become more visible. As use cases grow more sophisticated, the tolerance for messy, inconsistent, or incomplete data shrinks to zero.
Organisations discovered that basic automation—like invoice processing—can tolerate messy data. A few mismatched vendor names or inconsistent formats don’t break the system. But the same data quality issues will make predictive models for cash flow or revenue forecasting fail.
Bottom line: AI can’t learn reliable patterns from inconsistent data. Remediation can take 12–18 months.
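To make the vendor-name problem concrete, here is a minimal Python sketch with made-up transactions. Three spellings of one supplier fragment its history into what looks like three small vendors, until a crude normaliser collapses them into one reliable series:

```python
from collections import defaultdict

# Hypothetical transactions: the same supplier appears under three spellings.
transactions = [
    ("Acme Corp", 1200.0),
    ("ACME Corporation", 800.0),
    ("acme corp.", 500.0),
    ("Globex Ltd", 300.0),
]

def spend_by_vendor(rows, normalise=lambda name: name):
    totals = defaultdict(float)
    for name, amount in rows:
        totals[normalise(name)] += amount
    return dict(totals)

# Raw data: one supplier looks like three vendors, so any model
# trained on this history learns a fragmented, misleading pattern.
raw = spend_by_vendor(transactions)

# A crude normaliser: lowercase, strip punctuation and legal suffixes.
def normalise(name):
    cleaned = name.lower().replace(".", "").replace(",", "")
    for suffix in (" corporation", " corp", " ltd"):
        cleaned = cleaned.removesuffix(suffix)
    return cleaned.strip()

clean = spend_by_vendor(transactions, normalise)
print(len(raw), len(clean))  # 4 vendor keys before, 2 after
```

Real remediation involves fuzzy matching, master data management, and human review, which is why it stretches into months rather than a script.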
#3 The gap in technical skills keeps widening.
The skills gap cuts both ways.
Finance professionals can spot a bad forecast, but they don’t know how to design an AI solution. They can’t evaluate model performance or troubleshoot when predictions drift. Engineers and data scientists can build AI systems, but without domain knowledge, they struggle to understand the actual problems, regulatory constraints, and business context.
Bridging either gap is difficult. Universities aren’t producing enough graduates with both skill sets, and cross-training existing staff takes time that competes with operational demands. As a result, organisations are either competing for scarce dual-skilled talent or taking years to develop these skills internally.
#4 Implementation lacks clear direction.
Perhaps most telling: the reports show that 25% of organisations don’t know how to move from planning to actual implementation. They’ve attended the conferences, read the whitepapers, and approved the budgets. Some have hired consultants who delivered strategies and roadmaps.
Yet when it comes to deploying something real—deciding which data to use, how to handle edge cases, what governance framework to apply—they’re stuck. The gap between strategy and execution proved wider than anticipated.
#5 Not everyone has planned AI implementations.
Analytical outlets also report that 16% of companies have no planned AI implementations for the coming year. The segment of continued sceptics is small but notable. It is also unsurprising, given a key finding from the same survey: among finance functions that overcame the initial barriers to launching a pilot, 91% report low or moderate impact.
For some, early pilots failed, souring leadership on further investment. For others, the cost-benefit analysis didn’t justify moving beyond current automation. A subset concluded that their competitive position doesn’t yet require AI capabilities, though this calculus may shift as competitors pull ahead.
The psychology behind the plateau adds further clarity.
There’s a behavioural element to the 2024–2025 slowdown that data alone doesn’t capture. Early AI adopters were self-selected risk-takers willing to experiment with immature technology. The organisations that joined in 2023–2024 followed once the path seemed clearer and vendor solutions had matured.
The remaining non-adopters include a higher concentration of risk-averse organisations. Some were burned by previous technology implementations; others operate in heavily regulated environments where AI governance requirements aren’t yet clear. These companies move more slowly by nature and by design.
Additionally, some organisations that fall into the 59% “using AI” category have implementations so limited—a single pilot in one department, a vendor-provided feature turned on but rarely used—that calling them “AI adopters” overstates their actual commitment.
The plateau may reflect organisations being more honest about what constitutes real AI adoption versus superficial experimentation.
Meanwhile, the gap between leaders and laggards keeps widening, and that’s where the data gets interesting.
Despite some seeing low impact, organisations with AI in production or at scale tend to see measurably better results than beginners. According to major consulting firms tracking these implementations, AI leaders are twice as likely to report moderate business impact and three times as likely to see high impact. At this point, it isn’t an incremental improvement but a structural advantage.
These leaders achieve something the hype rarely mentions: efficiency gains, cost savings, and richer analytical insights. Not one at the expense of another, but all three simultaneously. Such results are possible because mature implementations deliver on multiple dimensions at once.
These findings lead to a simple conclusion: early success compounds. Organisations that figured out AI in 2023 or early 2024 are now pulling away from those still working out governance frameworks.
So, what are finance teams actually building? The use cases reveal a pragmatic pattern. The focus is on solving known problems and handling high-volume tasks, such as:
Notice what’s missing from the top four? No strategic forecasting. No complex financial modelling. No revolutionary transformation of how finance operates. The highest-adoption use cases are practical, measurable, and solve known problems.
When you look at why organisations aren’t moving faster, you’ll notice that the barriers haven’t changed much over the years. However, they’ve become harder to ignore.
Data literacy and technical skills top the list. Finance professionals can spot a bad forecast, but they can’t debug a machine learning pipeline. That creates a dependency on IT or external vendors that slows everything down.
Data quality and availability follow close behind. Organisations discover (usually mid-project) that their financial data isn’t clean, standardised, or accessible enough to support AI at scale. Fixing this requires work that predates any AI implementation.
Cultural acceptance emerges as a major hurdle, especially for non-adopters. Teams resist tools they don’t understand. Managers hesitate to trust automated decisions they can’t explain. Executives worry about regulatory scrutiny of black-box models.
The pattern is clear: technical challenges can be overcome with enough investment, but cultural and organisational challenges persist regardless of budget.
According to the forecasts, the plateau won’t last. Several converging trends—maturing technology, evolving regulations, and simple competitive pressure—will reshape how finance organisations approach AI over the next three years. So, what changes, and what stays stuck?
It is predicted that 20% of finance organisations will stop hiring or developing non-digitally literate talent entirely by 2028. Organisations will prioritise digital fluency over traditional qualifications, and strong accounting fundamentals or industry experience won’t compensate for a lack of proficiency with AI tools.
This potential change represents a fundamental split in how finance teams are built. The workforce will stratify into three distinct tiers:
CFOs are already making the calculations. Every dollar spent training someone to use basic software is a dollar not spent hiring the talent finance actually needs. The advanced capabilities required to compete don’t come from incremental upskilling of the existing workforce. They come from bringing in entirely different people.
Basic technology users won’t be fired en masse. Instead, they’ll be managed out through performance frameworks that require digital competencies they don’t have. Natural attrition will do the rest. Finance needs data scientists, AI experts, developers, and data engineers—these roles will make up the core of every finance team in a few years.
Organisations still investing heavily in basic digital literacy programs face a critical problem. They may soon fall into a resource trap. Training someone to interpret a dashboard or use cloud-based tools consumes budget and management attention. Meanwhile, competitors are hiring people who can build those dashboards, design those tools, and architect the systems underneath.
At some point, organisations that continue to invest in basic digital literacy training will lack the resources to develop advanced AI talent. Finance functions that can’t attract and retain digital experts will struggle with core capabilities: forecasting accuracy, risk detection, operational efficiency, and strategic insight. That’s why talent requirements will evolve along with the technology.
By 2029, 40% of FP&A teams at large enterprises will abandon bottom-up manual planning in favour of AI-driven simulation. Today, this figure sits at 5%. That isn’t gradual evolution; organisations are moving towards replacing how planning actually works. We can expect:
The barriers, however, are real and substantial.
Firstly, AI simulation requires integrated data from financial systems, operational platforms, and external sources working together. Most organisations have their data siloed in separate systems with incompatible formats and conflicting governance.
Secondly, data architecture modernisation is the prerequisite for this shift. It’s expensive, disruptive, and politically complicated. It isn’t a software problem. It’s an organisational challenge involving data ownership, access rights, and architectural decisions that cut across departments.
Thirdly, algorithmic accuracy remains a challenge. Currently, only 18% of organisations report improved forecast accuracy from AI, even though it is their second-highest goal. The technology works when the foundations are solid. Most foundations aren’t solid—at least not yet.
So, we’re dealing with another competitive divide.
Early adopters gain speed and agility that compounds over time. They respond faster to market changes, test strategies through simulation before committing resources, and present leadership with options instead of outdated projections.
Laggards face something worse than inefficiency. They risk paralysis—unable to make confident decisions because their planning processes can’t keep pace with external events. That, in turn, undermines investor confidence. Boards lose patience when management can’t model the impact of strategic choices before making them.
Teams aim not to work faster, but to see further. In a few years, 30% of finance teams will have AI deployed in transactional processes primarily for predictive risk and revenue insights, not just efficiency.
Processing invoices more quickly, closing books faster, and similar capabilities were the efficiency gains that justified early AI investments. At the moment, they are reaching diminishing returns.
The next wave of AI investments goes deeper, focusing on extracting predictive intelligence from the same data flows. What we’re about to witness is a fundamental shift from automation for efficiency gains to predictive analytics for business value.
Traditional automation asks: “How can we do this task with fewer people and less time?” The new approach suggests changing the perspective: “What can this transaction data tell us about future risk, revenue, and opportunity?” It redefines what gets built, what gets measured, and what finance teams actually do.
Hence, new capabilities are emerging across processes, expected to enhance the following areas:
Organisations have squeezed most of the efficiency gains available from traditional AI. Reducing invoice processing time from five days to two delivered meaningful ROI. Lowering it further to 1.5 days barely registers as business impact.
Incremental speed improvements no longer justify the investment, governance overhead, and change management required. Finance leaders need a different value proposition. Predictive insight provides it.
In terms of market impact, CFOs will demand greater transparency for these advanced use cases. When AI simply automates a manual process, explainability is a nice-to-have. When it recommends credit limits or predicts revenue risk, explainability becomes mandatory.
Hence, we can expect increased scrutiny on three areas:
Vendors that can’t answer these questions clearly won’t win deals for predictive use cases, regardless of model performance.
By 2028, finance organisations running cloud ERP with embedded AI assistants will close their books 30% faster than those using traditional workflows. The speed gain comes from a different way of coordinating work rather than doing the same tasks more quickly.
Three core things make a difference here.
1\. AI assistants coordinate multiple tasks across the close process.
Current automation handles individual tasks. One tool reconciles accounts, one flags exceptions, and another generates reports. Meanwhile, an AI assistant manages the entire workflow like a project coordinator, orchestrating multiple simple agents. It knows which reconciliations need attention first, routes exceptions to the right people, and monitors progress across all activities.
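The coordinator pattern described above can be sketched in a few lines of Python. The agent names, task labels, and priorities here are purely illustrative, not any vendor’s actual implementation; the point is only that one coordinator ranks the work and routes each item to the right agent or person:

```python
import heapq

# Close tasks as (priority, task, assignee); lower number = more urgent.
# A real AI assistant would set these priorities itself and monitor progress;
# here we only demonstrate the routing order a coordinator produces.
tasks = [
    (1, "reconcile-cash", "recon-agent"),
    (3, "generate-flux-report", "report-agent"),
    (2, "resolve-exception-JE-1043", "human:ap-team"),
]

queue = list(tasks)
heapq.heapify(queue)  # turn the list into a priority queue

completed = []
while queue:
    priority, task, assignee = heapq.heappop(queue)
    # In production, the coordinator would dispatch to the agent and await
    # its result; this sketch just records who handles what, in what order.
    completed.append((task, assignee))

print(completed)  # cash reconciliation first, exception routing second
```

The design point is that the single-purpose agents stay simple; the intelligence lives in the prioritisation and routing layer.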
2\. The automation happens natively within the cloud ERP.
Everything runs inside the ERP system, where financial data already lives. There’s no need to build custom integrations between different tools, connect disparate systems through middleware, or copy data between platforms. Organisations avoid the complexity and fragility that come from cobbling together multiple point solutions from different vendors.
3\. Audit readiness is enhanced through transparent records.
The system records each decision the AI makes, every exception it handles, and every adjustment it processes. Audit teams can trace the entire close process without hunting through email chains or asking what happened to specific entries. It is clear who did what, when, and why. Audit readiness becomes built into the workflow.
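A minimal sketch of what such an append-only decision log might look like. The field names and example entries are illustrative, not any vendor’s schema; the essential property is that every action carries who, what, when, and why:

```python
import datetime
import json

# Append-only audit trail: each AI or human action becomes one record.
audit_log = []

def record_decision(actor, action, entry_id, rationale):
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,          # which assistant/agent or person acted
        "action": action,        # what was done
        "entry_id": entry_id,    # which journal entry or exception
        "rationale": rationale,  # why -- the explanation auditors need
    })

record_decision("close-assistant", "auto-matched reconciliation",
                "JE-1042", "amounts and references matched within tolerance")
record_decision("jdoe", "approved exception",
                "JE-1043", "known timing difference, documented last quarter")

# Auditors can replay the whole close without digging through email chains:
print(json.dumps(audit_log, indent=2))
```

In a real ERP the log would be tamper-evident and stored with the transactions themselves, but the who/what/when/why structure is the part that makes audit readiness built-in.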
These capabilities already exist but have so far seen limited adoption. Most implementations are tightly embedded within specific vendor ecosystems: organisations committed to a particular cloud ERP platform can access these features, while those running hybrid environments face complications.
Yet adoption is accelerating. Vendors are rapidly expanding their AI assistants, with offerings tailored to different customer segments.
From a short-term perspective, it gives finance leaders two immediate benefits:
Both matter for organisations under pressure to modernise their technology stack while controlling costs.
But there’s a risk to account for: the potential vendor lock-in. These AI assistants work within their vendor’s ecosystem. The more processes that depend on embedded AI, the harder it becomes to switch platforms later. Companies need to evaluate this trade-off deliberately, asking themselves:
There’s no right or universal answer. It depends on the organisation’s technology strategy, risk tolerance, and existing vendor relationships.
The factors that make these complex implementations of entire workflows achievable include the following:
The technology only delivers if the foundation supports it. Without meeting these conditions, the promised 30% improvement turns into a struggle to extract value, regardless of the technology’s capabilities.
Meanwhile, leading cloud ERP vendors are differentiating how they position their AI assistants for specific offerings and customer segments.
Some prioritise breadth across all modules. Others focus on depth in specific functions like close management or forecasting. Enterprise-tier vendors build for complex organisational structures and advanced governance, and mid-market platforms emphasise quick deployment and simplicity over configurability.
Analytical outlets tracking enterprise AI maturity point to a fundamental split forming in finance organisations. It is not a case of AI adopters versus non-adopters. That distinction is already outdated. The new divide separates AI-first finance functions from everyone else.
What defines AI-first? These organisations operate differently across four dimensions. They:
AI-first organisations build capabilities that compound with each implementation. The gap between them and the rest widens exponentially. Early cloud adopters saw the same dynamic. They developed expertise that made subsequent migrations easier while latecomers struggled with higher costs and ongoing disadvantages.
There’s still time to catch up, although the barriers to progress grow steeper as leaders pull away. AI-first organisations are building institutional knowledge, technical infrastructure, and cultural norms that can’t be replicated quickly, regardless of budget.
Finance functions need to decide which side of this divide they’ll occupy. The choices made in 2025 and 2026 will shape the competitive position for years to come.
The surveys show that 59% of finance leaders expect AI spending to increase by at least 10% in 2026. That isn’t routine maintenance or incremental adjustment; a material commitment of this size reflects shifting priorities.
As for the factors driving this spending, there are three that dominate.
1\. Improved efficiency needs.
The efficiency case for AI has evolved. Early implementations targeted high-volume transactional work. The next wave focuses on cognitive tasks that consume disproportionate senior staff time: variance analysis, management reporting, regulatory compliance documentation, and ad-hoc financial modelling.
These processes are measured in days of analyst time per reporting cycle. AI that can compress a three-day variance analysis into three hours doesn’t just save time and cost. It changes what questions finance can answer and how quickly leadership gets strategic insight.
2\. Decision-making support.
CFOs increasingly view AI as decision infrastructure rather than an operational tool. Consequently, the investment goes toward the corresponding capabilities. Decision makers want to surface relevant data faster, identify patterns humans miss, and model scenarios that would take weeks to build manually.
This matters during acquisitions, market entry decisions, and other high-stakes moments where better information quality directly affects outcomes. Finance teams willing to spend on AI for decision support know that improved decision quality justifies the investment multiple times over.
3\. Enabling new finance capabilities.
Some organisations are funding AI to do things finance has never done before. Real-time profitability analysis by customer segment. Automated detection of revenue leakage across contracts. Predictive cash flow forecasting that updates continuously as transactions occur.
It may seem that these capabilities replace existing processes. However, they create net-new analytical capacity that wouldn’t exist without AI. The budget comes from strategic investment, not productivity savings.
30% of finance leaders plan to increase spending specifically on big data analytics and automated machine learning.
Big data analytics in finance means connecting previously siloed data—transactional systems, operational platforms, external market data, customer behaviour, supplier information—into unified analytical environments. The value lies in answering questions that span multiple domains simultaneously.
For example, which customer segments are most profitable after accounting for payment behaviour, service costs, and lifetime value? Most finance organisations can’t answer this today because the required data lives in separate systems.
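Once the data is joined on a customer identifier, the question becomes a few lines of arithmetic. This toy Python sketch, with invented figures standing in for the three siloed systems, shows why revenue alone misranks customers:

```python
# Illustrative data from three systems, keyed by a shared customer ID.
revenue = {"C1": 10000, "C2": 8000, "C3": 12000}        # billing system
service_cost = {"C1": 2000, "C2": 5000, "C3": 3000}     # ops platform
late_payment_cost = {"C1": 100, "C2": 1500, "C3": 200}  # AR ageing data

def profitability(customer):
    # True margin once service costs and payment behaviour are netted out.
    return (revenue[customer]
            - service_cost[customer]
            - late_payment_cost[customer])

ranked = sorted(revenue, key=profitability, reverse=True)
print(ranked)  # ['C3', 'C1', 'C2']
```

The arithmetic is trivial; the barrier the article describes is that these three dictionaries normally live in three systems with incompatible keys, which is exactly what a unified analytical environment fixes.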
Automated machine learning (autoML) lowers the barrier to building predictive models. The standard approach is for data scientists to hand-code algorithms. With autoML, finance analysts can create forecasting models, risk scores, and classification systems—all without coding.
This approach changes who can work with AI. It shifts the bottleneck from scarce data science talent to data quality and business problem definition—areas where finance teams already have expertise. Organisations investing in automated machine learning seek to unlock use cases that never make it onto centralised AI roadmaps.
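The autoML pattern can be illustrated with a toy sketch: the analyst supplies a history and a metric, and the tooling tries candidate models and keeps the one with the lowest holdout error. Real autoML platforms search far larger model spaces with proper cross-validation; this only shows the selection loop the analyst no longer has to hand-code:

```python
# Hypothetical monthly cash inflows; last two months held out for validation.
history = [100, 104, 108, 113, 117, 121]
train, holdout = history[:4], history[4:]

def mean_model(train):
    # Candidate 1: naive baseline, predicts the historical average.
    avg = sum(train) / len(train)
    return lambda step: avg

def trend_model(train):
    # Candidate 2: linear trend extrapolated from the training window.
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return lambda step: train[-1] + slope * step

def holdout_error(model, holdout):
    # Total absolute error over the held-out months.
    return sum(abs(model(i + 1) - actual) for i, actual in enumerate(holdout))

candidates = {"mean": mean_model(train), "trend": trend_model(train)}
best = min(candidates, key=lambda name: holdout_error(candidates[name], holdout))
print(best)  # 'trend' wins on this upward-sloping series
```

The analyst’s job reduces to supplying clean data and defining the metric, which is exactly where the bottleneck shifts.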
The spending patterns reveal that finance leaders aren’t spreading the budget evenly across all AI possibilities. They’re concentrating resources on areas where business value is demonstrable. These include efficiency in cognitive work, better decisions on high-stakes choices, and analytical capabilities that create competitive advantage.
Organisations boosting big data and autoML spending are betting on a specific thesis: that finance’s analytical future depends on connecting more data and empowering more people to extract insight from it.
Moving to AI-first operations isn’t just a technology decision. It requires deliberate work across six areas, ranging from data quality to cultural acceptance. Organisations that neglect any of these consistently struggle, regardless of how much they spend on AI tools.
1\. Invest urgently in data quality and skills.
These two factors remain the core enablers.
Data quality issues that seem manageable in manual processes become catastrophic when AI scales them across thousands of transactions. A 2% error rate in vendor classification might cost a few hours of manual cleanup each month. Feed that into an AI system making automated payment decisions, and the 2% becomes a material risk.
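A back-of-the-envelope calculation, with illustrative volumes, shows why the same 2% rate changes character at AI scale:

```python
# Illustrative figures only: the point is the scaling, not the exact counts.
error_rate = 0.02                 # 2% of vendors misclassified
manual_invoices = 500             # reviewed by hand each month
automated_invoices = 50_000       # processed by an AI pipeline each month

manual_errors = error_rate * manual_invoices        # ~10: hours of cleanup
automated_errors = error_rate * automated_invoices  # 1,000 wrong payment decisions
print(manual_errors, automated_errors)
```

The error rate never changed; the volume it applies to did, and with automated payment decisions each error now acts before anyone reviews it.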
Skills matter differently than most assume. Finance doesn’t need every team member to code machine learning algorithms. It needs enough people who understand what AI can and cannot do, how to frame problems AI can solve, and when to question AI-generated outputs. Without this baseline literacy, users either distrust AI or misapply it.
Addressing both data quality and skills now creates compounding returns across all future AI work. Deferring either means facing the same problems repeatedly—failed pilots, project delays, and technical debt that grows harder to fix.
2\. Build a portfolio view of use cases.
Finance leaders tend to fall into a trap here. Some chase only quick wins that deliver immediate ROI but rarely create competitive advantage. Others do the opposite: they pursue only transformational projects. The latter often stall in planning or fail during implementation.
The best approach for a portfolio is to balance tactical quick wins with strategic differentiation:
More importantly, the portfolio must evolve. Early-stage AI organisations can focus on quick wins to prove value. Mature organisations should shift resources toward strategic capabilities once foundational automation is in place.
3\. Deploy targeted development for digital talent.
Traditional training programs, such as week-long courses and certification programs, move too slowly for AI’s pace of change. By the time content is developed and delivered, the technology has evolved. Effective organisations can use more dynamic approaches instead:
These tactics develop competency faster than traditional approaches, cost less, and integrate learning into everyday workflows.
4\. Modernise data architecture.
AI simulation, predictive analytics, and intelligent automation all require data that’s integrated across systems, accessible to both humans and machines, and adheres to high-quality standards. To add more specifics:
Most finance organisations have data that meets one or two of these criteria, but rarely all three. Teams that postpone architecture modernisation end up perpetually piloting AI but never scaling it.
5\. Implement robust AI governance.
When an AI system flags a transaction as potentially fraudulent, stakeholders need to understand why. What factors triggered the alert? How were they weighted? Does the logic align with regulatory requirements? This is when explainability and auditability come into play.
Governance frameworks need to address core logic and rules for the full spectrum of use cases. To name a few examples, they define transparently:
Without this structure, AI implementations remain fragile. A single unexplained error can undermine trust across the entire organisation.
6\. Manage cultural change.
The technical challenges of AI are often easier to solve than the cultural ones.
Overcoming resistance and building trust may be more complicated than it seems. Finance professionals built careers on expertise, judgment, and a detailed understanding of their domain. AI systems that automate tasks or question human conclusions can feel threatening rather than helpful.
Resistance manifests in predictable ways. Teams find reasons why AI won’t work for their specific processes. They point to edge cases and exceptions. They emphasise judgment calls that “no algorithm could make.” And sometimes these objections are valid. Yet, frequently, they’re defence mechanisms.
Building trust requires transparency about what AI will and won’t change. Which jobs are at risk? What new roles will emerge? How does AI success affect individual performance evaluations and career progression? The answers must be specific and honest, even to uncomfortable questions. It works much better than friendly reassurance or optimistic evasions.
On the practical side, it makes sense to involve sceptics early in AI projects. Let them identify problems, suggest improvements, and see limitations firsthand. People resist what’s imposed on them but support what they help create. Also, focus on human-AI collaboration, not AI replacement. That, in the end, is how it should work.
Cultural change moves slowly, and organisations shouldn’t treat it as an afterthought. The technical part is already complex; there’s no need to add struggles that are easy to manage when planned for.