CFOtech UK - Technology news for CFOs & financial decision-makers

AI deepfakes trigger 'Great Trust Recession' online

Wed, 11th Mar 2026

New research from biometric identity verification provider iProov suggests consumers are increasingly sceptical about what they see online, with almost half saying they now doubt "almost everything" because of AI-generated content.

The survey of 2,000 people in the UK and the US, conducted in the first quarter of 2026, describes what iProov calls "The Great Trust Recession". It points to rising concern about deepfakes and AI-driven impersonation across financial services, media, recruitment and public services.

Overall, 48% of respondents said they question the authenticity of almost everything they encounter online. Women were slightly more likely to take that view than men (50% versus 46%).

Changes in habits

The findings suggest the uncertainty is changing online behaviour. More than half of respondents (52%) said they have changed their habits and now visit only trusted news websites to ensure the information they consume is verified.

Social media use also appears to be shifting. Some 26% said they are spending less time on social platforms specifically to avoid fake content. Among 18 to 24-year-olds, the figure rose to 32%.

Respondents also placed responsibility for identity-related fraud primarily on institutions. When asked who should be responsible for solving online identity verification, 87% pointed to institutions rather than individuals, spreading that responsibility across tech platforms, banks, AI companies and government.

Bank switching risk

Financial services emerged as a flashpoint for trust and liability. The survey found 74% of consumers would be likely to switch banks if a competitor offered a guarantee that it could prove genuine human presence, including 26% who said they would be very likely to switch immediately.

Willingness to move banks was highest among 25 to 34-year-olds. In that group, 41% said they would be very likely to switch immediately, compared with 14% of those aged 65 and older.

Expectations around responsibility for losses are also shifting. More than half of respondents (52%) said banks should be liable for losses caused by deepfake-enabled fraud. By contrast, 24% said keeping personal data and biometrics safe is primarily their own responsibility.

Generational differences also emerged on potential public backstops. Some 18% of 18 to 24-year-olds said the government should provide insurance for online identity fraud, compared with 4% of those aged 55 to 64.

Hiring and workplaces

The research points to concern about impersonation risks in recruitment. Three-quarters of respondents (75%) said it is likely that someone using a fake identity or a deepfake could be hired by a major company, with 26% saying very likely and 49% somewhat likely.

At the same time, many job seekers said they would accept stronger identity checks. Some 81% said they would consent to a biometric face scan during an application process as a fraud-prevention measure. Nearly half (47%) said they would consent happily, while 34% said they would consent reluctantly if they understood how the data would be used. A smaller group (13%) said they would refuse on privacy grounds.

Government trust gap

The study also reports low confidence in government websites' ability to correctly identify users and block impostors. Some 38% rated their trust as low or zero (27% low and 11% zero).

Even so, respondents preferred digital verification methods when they were perceived as secure. When offered a choice, 43% preferred a secure face scan via a mobile app, compared with 30% who preferred an in-person appointment and 17% who chose a video call. The results also suggest 55% would be more likely to use government services online if secure biometric login were available.

Age remained a factor in channel preferences. Among those aged 65 and older, 32% preferred in-person appointments and 19% preferred mailed paper documents, pointing to a divide between digital-first approaches and traditional formats.

Industry response

iProov frames the findings as evidence that institutions face growing pressure to prove authenticity in digital interactions. The company also released an online quiz-style game, "Find the Fake", which asks users to identify an AI-generated deepfake among social media profiles.

Andrew Bud, iProov's founder and CEO, linked the results to wider concern about digital credibility.

"AI has blurred the line between real and fake in digital ecosystems, and too many organizations are caught off guard. This study highlights a major shift in consumer sentiment, showing that generative AI is actively undermining the credibility of the institutions people have traditionally relied upon. Deepfakes are quickly undermining the trust at the heart of the digital economy, ultimately compelling consumers to change their behaviors and, importantly, who they are willing to do business with."

The findings are likely to sharpen debate about how banks, platforms, employers and public bodies verify identity as realistic synthetic media becomes more accessible.