
UK banks face rising AI-driven fraud as tech outpaces rules

Fri, 5th Dec 2025

Financial institutions in the UK face a period of heightened risk in 2026 as fraudsters develop new tactics and exploit gaps in regulation. The growing use of artificial intelligence (AI), evolving social engineering techniques, and the continued role of online platforms in enabling scams are the key risks expected to affect banks and consumers in the coming year.

AI-driven fraud

Criminals are making greater use of agentic AI and remote access scams. These methods allow them to mimic real customers with a level of accuracy that challenges existing fraud detection systems. As more customers engage in digital banking and use devices that can be remotely controlled, the chances of exploitation through screen-sharing apps, mobile malware, and remote access tools are rising.

"In 2025, UK Finance figures show that overall fraud losses levelled out after years of relentless growth. But beneath that headline, the picture is more complex. The plateau masks a shift in tactics, with criminals increasingly relying on agentic AI, remote-access scams, and sophisticated social engineering. There are more than 7,000 instances of remote purchase fraud a day in the UK alone, and new technologies such as iPhones that allow devices to be remotely controlled will further exacerbate the exploitation of screen-sharing apps, mobile malware, and remote access tools," said Jonathan Frost, Director of Global Advisory for EMEA, BioCatch.

Frost noted that these threat actors are investing in technology that helps them avoid detection and operate on a scale not previously seen. This means banks must adapt rapidly to keep pace with changes in fraudster behaviour and technology adoption.

Behavioural biometrics

The rapid adoption of AI is not limited to criminals. Financial institutions are also exploring agentic AI solutions to automate risk checks and identify complex patterns in fraud. However, distinguishing between legitimate customer behaviour, AI agents acting legitimately, and those acting fraudulently is becoming increasingly complex. Behavioural biometrics - analysing patterns in how users interact digitally - is being highlighted as a critical tool.

"Looking to 2026, the challenge is set to escalate. As fraudsters adopt AI at scale, banks will face a new generation of synthetic, remotely controlled behaviour designed to mimic genuine customers with uncanny accuracy. The ability to distinguish real human intent from AI-generated or manipulated interactions will become the defining battleground. Behavioural biometrics will play a central role, offering real-time insight into patterns that can't be forged by automation alone," said Frost.

Role of Big Tech

The landscape is further complicated by the role of internet platforms and so-called 'finfluencers'. Around 70% of authorised push payment (APP) fraud in the UK starts online, and leaked data suggest that hosting fraudulent content accounts for 10% of Meta's annual revenue. Nearly 80% of young people are said to trust the financial information shared by influencers on these platforms.

"Big Tech has become a significant enabler of scams, and this will continue throughout 2026. Around 70% of APP fraud in the UK begins online, and recent leaks have reported that 10% of Meta's revenue alone stems from hosting fraudulent content. The rise of finfluencers is another concern, with nearly 80% of young people trusting information they share," said Frost.

Efforts by financial institutions to get social platforms to do more have not delivered significant changes. Delays to the UK's Online Safety Act and diluted measures within the pending UK fraud strategy and EU AI Act mean that criminals continue to exploit these online spaces with limited resistance.

Regulatory response

While regulation is evolving, Frost warns that it risks being reactive rather than proactive. Upcoming changes, such as the EU's PSD3 and PSR, are not expected to come into force until 2027 and may not go far enough to create parity across regions or fully address the root causes of fraud. There are also concerns that reimbursement requirements, although they aid victims, have not addressed the underlying rise in the number of fraud cases.

"Regulation in 2026 risks being reactive rather than revolutionary. Beyond holding Big Tech to account, the ubiquity of instant payments, AI, and crypto means that legislators must move quickly to develop the frameworks for stronger consumer protections," said Frost.

Early intervention

Preventing scams before transactions occur is seen as critical. Mandatory reimbursement practices in the UK have provided some relief to victims but have not reduced the rate of scams. Real-time intelligence sharing between institutions, as piloted in Australia, is viewed as an effective model to strengthen fraud prevention without disrupting customer experience.

"Preventing scams before the transaction occurs must be the goal. Along with deploying better fraud detection tools and behavioural technology to decrease APP losses, financial institutions must work together. Real-time intelligence sharing networks, as seen in Australia, can dramatically improve scam detection without adding friction for customers. In the face of evolving fraud in 2026, this is the only solution," said Frost.

"It cannot fall to banks alone. Regulators, telecoms firms, and technology platforms must also step up. Only a united, cross-sector response will allow us to meaningfully turn the tide in 2026."