UK firms warned of legal risk as AI adoption outpaces policy

Thu, 20th Nov 2025

Legal experts have warned that UK firms using artificial intelligence without formal policies in place could expose themselves to legal and reputational risks. The use of AI is growing rapidly across British businesses, but many organisations have yet to update their staff guidance on these new tools.

Recent cases

Examples have already emerged of companies facing legal difficulties as a result of AI decisions. In one case, an Uber Eats delivery driver's account was deactivated after repeated failures in selfie verification checks powered by Microsoft's facial recognition technology. The driver, who is black, experienced repeated mismatches between his selfies and his profile photo, and no human reviewed the errors before his account was cancelled. The driver pursued a claim of indirect racial discrimination, which Uber settled out of court in 2024.

In another widely reported incident, a Samsung employee uploaded confidential source code to ChatGPT, resulting in sensitive information being exposed outside the company's control.

Policy necessity

Emily Warman, Employment Associate at Square One Law, said the risks are not confined to unusual situations. She cautioned that as AI becomes deeply embedded in business operations, the absence of clear guidelines can lead to problems.

"Workplace policies regarding the acceptable use of technology, such as phones, social media etc have been commonplace for some time, however many businesses are yet to extend this principle to AI," said Emily Warman, Employment Associate, Square One Law.

Warman explained that an official AI policy can help manage emerging risks and provide both prevention and remediation mechanisms for misuse. "Introducing an AI policy, however, ensures the organisation has taken steps to manage the risk and deal with the consequences if an employee has misused AI, while providing a framework for staff to better understand how they should and shouldn't use generative tools, avoid discrimination and safely process personal data or commercially sensitive confidential information using AI.

"If it is then discovered that someone has used it outside of its permitted use, then it could be taken down the disciplinary route, potentially saving the employer from significant reputational and legal damages which could've arisen further down the line," said Warman.

Corporate responses

Some larger firms have started outlining ethical frameworks. NatWest published its own AI and Data Ethics Code of Conduct earlier this year. Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest, described the rationale behind the policy.

"The Code of Conduct is, in effect, a statement of our intent, outlining our fundamental principles regarding ethical use of AI and it reflects our aspirations to foster responsible and transparent practices. So, when we created our Code of Conduct, our aim was to align it to NatWest Group's purpose, values and strategic priorities. The Code of Conduct contains principles that govern how we use and process customer data and how our AI systems are developed and deployed.

"For me, this is hugely important. It's not some conceptual idea, based on abstract ideas. The principles in our Code of Conduct are embedded into the way we design and build AI systems, and the data used to train them," said Dongha.

Governance gaps

Industry leaders are warning of the risks posed by a lack of preparation and inconsistent governance measures as AI adoption accelerates. Susan Taylor Martin, Chief Executive of the British Standards Institution, said many organisations may be underestimating the challenges as they expand AI use without robust oversight.

"While it can be a force for good, AI will not be a panacea for sluggish growth, low productivity and high costs without strategic oversight and clear guardrails - and indeed without this being in place, new risks to businesses could emerge.

"Divergence in approaches between organisations and markets creates real risks of harmful applications. Overconfidence, coupled with fragmented and inconsistent governance approaches, risks leaving many organisations vulnerable to avoidable failures and reputational damage.

"It's imperative that businesses move beyond reactive compliance to proactive, comprehensive AI governance," said Taylor Martin.
