UK finance chiefs prioritise AI accuracy over speed
Inaccurate outputs are the biggest barrier to the adoption of artificial intelligence among senior UK financial services executives, according to a Bloomberg poll of more than 100 decision-makers in the sector.
Half of the respondents said hallucinated facts or numerical errors were their main concern when using AI in financial markets. Another 27% cited a lack of explainability, suggesting that reliability and transparency remain the key tests for the technology in investment and trading settings.
The survey indicates that finance leaders are assessing AI differently from users in many other sectors, where speed and fluency often attract more attention. In this poll, only 5% cited fast outputs as the main source of confidence in AI systems, while 9% cited sophisticated language and reasoning.
Instead, respondents preferred safeguards that allow results to be checked. Source attribution was the feature that gave 32% the most confidence in AI systems, while 30% highlighted built-in error checking and 25% chose human oversight.
Trust tests
This pattern reflects the compliance and risk pressures that shape technology buying in financial services. Errors in market data, research, portfolio construction or trade support can carry regulatory, financial and reputational consequences, making verification more important than speed for firms considering broader AI use.
The poll covered senior decision-makers from both the buy side and the sell side. It was conducted live at Bloomberg's AI in Finance Summit in London, offering a snapshot of executives' views on adopting AI tools in front-office and investment workflows.
Beyond concerns about reliability, the findings also point to continued interest in broader uses of AI. Two-thirds of respondents (66%) said full-workflow AI assistants were the most exciting next development in financial services.
That put workflow tools well ahead of the other applications tested in the poll: personalised portfolio insights were chosen by 9% of respondents, while 12% picked no-code quant tools.
Workflow focus
The gap between those responses suggests that firms are looking beyond narrow use cases towards systems that can assist across multiple stages of the investment process. At the same time, the preference for attribution, error checking and human oversight indicates that wider deployment will depend on whether those systems can be audited and controlled.
For data and technology providers serving banks, asset managers and trading firms, the results underline a familiar commercial challenge. Buyers may want broader AI integration, but they are signalling that trust in outputs must come before speed or polished responses.
Bloomberg linked the survey findings to its own product development, outlining a roadmap for ASKB, its conversational AI interface for the Bloomberg Terminal, which is currently in beta.
The roadmap describes ASKB as becoming more deeply integrated into investment workflows. Bloomberg said it is being developed around trusted data, source grounding and controls intended to support professional use in institutional settings.
Amanda Stent, Head of AI Strategy & Research at Bloomberg, said the findings show what financial firms now expect from AI products.
"The results suggest that trustworthiness depends on whether an AI's outputs can be interrogated and validated," Stent said. "Solving this challenge depends on attribution, transparency and the quality of the underlying data so outputs can be traced to their sources, validated for accuracy and confidently used in decision-making. This is exactly what is shaping Bloomberg's approach to AI development. We are focused on combining high-quality, trusted data with AI that is embedded into real workflows and designed with accuracy, transparency and control at its core."
The findings add to a broader debate in the City over how quickly generative AI can move from experimentation to routine use in regulated environments. Financial institutions have invested heavily in pilots and internal tools. However, survey data such as this suggest that deployment decisions still hinge on practical questions of traceability, accountability, and error rates rather than on enthusiasm for the technology alone.
For senior executives weighing adoption, the message from the poll is clear: firms may want AI that can support complete workflows, but they will rely on it only when the output can be checked, explained and overseen by people.