UK finance staff use generative AI daily, survey finds
Research published by Smarsh shows that 61% of UK financial services and insurance professionals now use generative AI every day, while many do not believe their organisations can fully monitor the risks linked to its output.
The survey of 2,000 UK adults working in financial services and insurance points to widespread use of AI in routine work and regulated communications. Respondents said the technology is used in 50% of reporting tasks, 49% of call notes and summaries, 40% of client and customer communications, 38% of marketing and social media content, 37% of internal communications and 34% of compliance documentation.
That matters because fewer than half of respondents said they carefully check AI-generated material before using it. Only 41% said they thoroughly review and make significant edits to AI-generated output before it is sent or published.
Daily use
The findings suggest AI has moved beyond trial use in the sector and into routine workflows. Among workers aged 25 to 34, more than 36% said they use AI tools multiple times a day.
Older age groups also reported regular use. Nearly a third of those aged 35 to 54 said they use AI tools daily, as did 28% of those aged 55 to 64.
Rising adoption is also increasing output. Some 69% of respondents said AI is boosting the amount of content they create.
Much of that content appears to fall in areas with compliance implications. Alongside reporting and summaries, AI is being used in external communications and formal documentation, including material tied to compliance processes.
Oversight gap
Confidence in existing controls was low. Just 32% said their organisation's surveillance systems are fully equipped to detect risks in AI-generated content.
Younger workers were most likely to raise that concern. Among respondents aged 25 to 34, 43% pointed to gaps in surveillance, even though that group is also among the heaviest users of the technology.
The research also suggests staff do not see supervision as a barrier to using AI at work. Instead, many said stronger oversight would make them more willing to rely on the tools.
Across the sample, 81% said they would feel more confident using AI for work-related tasks if they knew that their organisation was properly monitoring the outputs. The figure rose to 87% among those aged 18 to 34.
That result was 12% higher than when the same question was asked a year earlier, suggesting staff increasingly recognise that AI use and compliance controls need to develop together.
The issue is especially sensitive in financial services and insurance, where firms must retain records, supervise communications, and demonstrate that customer-facing and internal materials meet regulatory standards. Wider use of generative AI in client interactions, reporting and documentation adds another layer to those obligations.
Paul Taylor, Vice President of Product at Smarsh, said pressure on compliance teams is rising as AI-generated communications become more common across channels.
"Financial institutions are rapidly adopting generative AI to meet growing demands for faster, more personalized client engagement, but this shift is creating an unprecedented volume and complexity of communications," Taylor said. "Compliance leaders are now under pressure to ensure every AI-assisted interaction is transparent, supervised, and defensible. Firms need the ability to capture and govern these communications across all channels, or they risk introducing critical blind spots at a time when regulatory scrutiny is intensifying. Getting this right isn't just about risk mitigation; it's about enabling innovation with confidence."