AI in Finance: Pilots, Promises, and a Reality Check


by Ala Haddadin, senior product marketing manager 

 

Several reports this year highlighted a growing divide between generative AI use cases and business transformation. The GenAI Divide: State of AI in Business 2025, a Project NANDA-MIT study, revealed that while enterprises are eager to launch GenAI pilots, most fail to progress beyond the pilot phase. The report attributes this gap to the inability of LLM tools to learn from feedback and retain context; the misallocation of AI budgets, with nearly 50 percent directed toward initiatives that yield lower ROI; and inefficient organizational structures around AI implementation.

 

Financial institutions are well-positioned to benefit from AI. They operate in data-rich environments where workflows are often highly procedural and governed by well-defined processes. They still rely on heavily customized in-house systems that are costly to maintain. Their dynamic regulatory and reporting requirements demand faster go-to-market times. However, does the sector see the same GenAI divide described in the report? At the Murex Capital Markets Technology Forum (CMTF) in Paris in June, our clients weighed in. 

 

A flexible, exploratory approach is the preferred strategy.

The participating financial institutions were at different stages of their AI journey. Many are adopting a more flexible, exploratory approach. 

“Having an AI roadmap is setting up for failure, because things are moving very fast. Instead, we’re providing our workforce with a secure environment where they can explore and upskill on AI,” a commodity trading firm CIO said. 

Similarly, other executives said they were moving away from rigid roadmaps, opting instead for user-driven experimentation within an AI-focused innovation lab. Prototyping freedom is gaining traction as well. Participants from a European financial entity said they were allowing users to build and test their own AI tools, provided they follow testing and maintenance protocols. 

“We created the EUCA (End User Computing Application) system to unblock users who want to move faster to market. It gives them flexibility and removes the excuse that things are too slow,” explained a COO. 

 

Institutions aim to lay data foundations for AI. 

While approaches to AI adoption varied, there was strong consensus that data strategy must be structured and deliberate. 

“The roadmap has to be for data—building data lakes and normalizing data,” a leading Wall Street bank participant emphasized.

Participating institutions are investing heavily in making high-quality data available for AI, implementing centralized data lakes and golden sources of truth. Looking ahead, LLM tools will need to generate new data points from their exchanges with humans and self-train on them to improve over time, much as people learn on the job. LLMs aren’t there yet.

 

Pilots are ongoing. 

Below are some use cases shared by the roundtable participants.

 

• Enterprise chatbots. Bots were among the earliest deployments. At an African bank, business users rely on chatbots to request custom reports in natural language, which has significantly reduced the workload for the teams in charge.

 

• KYC (Know Your Customer). LLMs are used by all participants to process KYC documents.

 

• Trade booking. NLP is used to process client emails requesting quotes, automate quote generation and integrate the resulting trades into a capital markets platform such as MX.3; similarly, trade details can be extracted from chat or email messages and booked in the trading platform (a minimal sketch of this extraction pattern follows this list).

 

• Hedging and predictive modeling. This use case remains early stage. “We’re exploring how AI can analyze market liquidity, trade decay, and support hedging strategies by combining historical data analysis with predictive modeling,” said a European head of trading.

 

• Production monitoring. A British banking group representative shared that their organization is using Dynatrace’s deterministic AI engine to monitor its MX.3 environment.

“In this case, it learned the normal behavior of the MX.3 system and automatically flagged deviations, helping the bank ensure stability and performance in production,” said the bank executive. 

 

• Development productivity. AI is improving efficiency in software development, though full code generation remains limited. A commodity trading firm reported it was using AI dev tools to replatform internal tools.

A major German financial institution reported a 20–30 percent improvement in development productivity. 

“Right now, it’s more about efficiency—like generating test code. We’re not yet at the point of letting AI write full code due to too many errors,” said an IT manager at a Danish bank. 
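To make the trade booking pattern described above concrete, here is a minimal sketch of an email-to-booking flow. It is an illustrative example only: a simple pattern match stands in for the NLP/LLM extraction step, and the field names and booking payload are hypothetical assumptions, not any vendor’s actual API, MX.3 included.

    # Minimal sketch: extract structured trade details from a client email and
    # shape them into a booking payload. The regex-based extraction, field names
    # and payload are illustrative assumptions, not a vendor API.
    import re
    from dataclasses import dataclass, asdict

    @dataclass
    class TradeRequest:
        client: str
        side: str        # "BUY" or "SELL"
        notional: float
        currency: str
        instrument: str

    def extract_trade_request(email_body: str, client: str) -> TradeRequest:
        # In production this step would typically be an NLP/LLM extraction with
        # validation; a simple pattern match stands in for it here.
        m = re.search(
            r"(?P<side>buy|sell)\s+(?P<notional>[\d,.]+)\s*(?P<ccy>[A-Z]{3})\s+(?P<instr>[\w\- ]+)",
            email_body, re.IGNORECASE)
        if not m:
            raise ValueError("Could not parse trade request; route to a human.")
        return TradeRequest(
            client=client,
            side=m.group("side").upper(),
            notional=float(m.group("notional").replace(",", "")),
            currency=m.group("ccy").upper(),
            instrument=m.group("instr").strip(),
        )

    def to_booking_payload(trade: TradeRequest) -> dict:
        # Payload shape for a hypothetical booking API on the trading platform.
        return {"source": "email-intake", "status": "PENDING_REVIEW", **asdict(trade)}

    if __name__ == "__main__":
        email = "Hi, we would like to buy 5,000,000 USD 5Y interest rate swap."
        print(to_booking_payload(extract_trade_request(email, client="ACME Corp")))

In practice, the extraction step would be an LLM or NLP model constrained by validation rules, and any request that fails to parse cleanly would be routed to a human for review.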

 

Are we at a point where AI can fully make trading decisions? The answer was unanimously no. 

“There are rules a trader must follow, and those can be codified. But AI can’t replace a trader’s instinct or appetite. The trader will always have the final say,” one banking executive said. AI pilots shared by the roundtable participants confirmed a trend also highlighted in the MIT report findings—what could be described as a “front-office bias.”

Traditionally the source of differentiation for financial institutions, the front office received the highest concentration of AI pilot initiatives.

 

Organization, security, trust and integration are challenges. 

Despite the large number of pilots, several challenges continue to slow the integration of these initiatives into the business processes of financial institutions.

 

Insufficient business-IT integration: Selecting the right AI use cases requires deeper collaboration between IT and business units. “This is very similar to the discussions around blockchain—a lot of ideas, but they didn’t really materialize at scale for business because it wasn’t guided properly by business SMEs,” warned the managing director of a Swiss private bank. In contrast, one British banking group has adopted a more integrated approach: The MX.3 team sits side by side with the AI team. Some 400 to 500 people—including tech specialists, traders, quants and risk experts—work in the same space.

 

This proximity and cross-functional setup foster rapid experimentation and are seen as key enablers of successful AI implementation.

 

Regulation, compliance, extensive scale and impact slow the pace of integration in large financial institutions: “In big organizations, we need to take into account many considerations, like due diligence and compliance, and these processes take time,” said an executive from a large British banking group. “We’ll get the full value when we can fully integrate AI into our processes,” added a Swiss private bank executive. “But in banking, we’re constantly confronted with data security concerns. Once we can integrate AI into decision-making, that’s when we’ll see real value.”

 

Security issues are complex. AI can originate from various sources—personal user tools, internally developed models, or embedded features in third-party vendor software. Two executives emphasized the importance of centralized AI governance to establish clear policies and enforce transparency in AI usage across business workflows. 

 

Documentation and traceability of AI decisions are critical for meeting regulatory requirements. AI-driven decisions must be justified to auditors. 

 

A rising number of AI use cases remain unintegrated, posing a risk to operational stability. As more end users develop AI tools, many of these solutions remain standalone—lacking integration into core workflows and often going unmaintained over time. “Everyone is a developer now,” said an executive at a Dutch banking group. “New traders often use Python to build their own tools, but these solutions are sometimes incomplete, poorly integrated, or not well maintained—posing operational and governance challenges.” The bank is actively working with both GenAI and traditional AI to bridge these gaps and improve integration.

 

GenAI chatbot usage remains limited. The current generation of AI tools lacks features like long-term memory and customization, which would make these tools easier to integrate with existing company systems. That said, the pace of innovation in AI is very fast. Recent developments like MCP (Model Context Protocol) and new memory management frameworks aim to improve how LLMs interact with enterprise tools, enabling them to retain context, reason over extended interactions and perform more complex tasks. 
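To make the MCP direction concrete, here is a minimal sketch of an MCP server built with the open-source MCP Python SDK, exposing one tool an LLM assistant could call. The server name, tool, parameters and returned values are hypothetical placeholders, not part of any production system.

    # Minimal sketch of an MCP server with one hypothetical tool, using the
    # open-source MCP Python SDK (pip install mcp). Tool name, parameters and
    # returned fields are illustrative assumptions only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("desk-data")  # hypothetical server name

    @mcp.tool()
    def get_position_summary(desk: str, as_of: str) -> dict:
        """Return a (mock) end-of-day position summary for a desk."""
        # In a real deployment this would query a governed, internal data source.
        return {"desk": desk, "as_of": as_of, "net_notional_usd": 125_000_000}

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio so an LLM client can invoke it

A compatible LLM client can then discover and call such tools with the user’s context, which is the kind of enterprise integration participants said today’s chatbots still lack.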

 

AI transformation is equal parts people and tech. 

Successful AI adoption is as much about people as it is about technology. Many of the institutions participating in the roundtable are investing in internal training, certifications and apprenticeships to build AI literacy across teams. People at all levels must be comfortable that AI won’t break regulations or compromise compliance. A trading firm executive highlighted another risk of falling behind on the AI transformation. 

 

“The new generation of employees is AI-native—they expect these tools to be available in the workplace,” they said. “Organizations that fail to meet this expectation risk falling behind in attracting and retaining top talent.” 

 

A second AI wave may deliver more meaningful value.

This year has been a learning curve for AI across our client organizations. While the first wave of AI brought some productivity gains at the individual and team levels—along with many promising pilot initiatives—it often fell short of delivering measurable impact. The second wave, focused on deep integration, holds the potential for more meaningful, long-term value. 

 

This phase involves integrating AI with company-specific data, enterprise data platforms, and broader IT infrastructure. It is also about focus. Organizations must carefully select AI use cases that deliver meaningful value across the front, middle, and back office—prioritizing applications whose results can be measured and that offer the highest return and operational impact. This wave comes with its own technological, security and organizational barriers to overcome. It may take several years for this wave to become transformative.

 

The CMTF roundtable made one point very clear to me: Participating financial institutions are keeping their people at the heart of their AI journey. They’re investing in internal AI education and creating safe spaces for employees to explore and innovate. For them, it’s not about replacing people with AI, but about bringing together technology and talent to build faster, better, and more resilient systems.