
Nearly 8% of UK & US staff use Chinese GenAI at work
New research from Harmonic Security shows that nearly one in twelve employees in the United States and United Kingdom uses at least one Chinese-developed generative AI tool at work.
The study, which monitored approximately 14,000 end users over a 30-day period, found that 7.95 percent of them had used Chinese generative AI (GenAI) applications such as DeepSeek, Moonshot Kimi, Qwen, Baidu Chat, and Manus. DeepSeek accounted for the overwhelming majority of this activity, at around 85 percent.
Exposure of sensitive data
Among the 1,059 users who interacted with Chinese GenAI tools, Harmonic Security identified 535 incidents where sensitive data was exposed. The exposures primarily took place through DeepSeek, with other incidents attributed to Moonshot Kimi, Qwen, Baidu Chat, and Manus.
The analysis categorised the types of sensitive data exposed: code and development artifacts (such as proprietary code, access keys, and internal logic) made up 32.8 percent, mergers and acquisitions data represented 18.2 percent, and personally identifiable information (PII) accounted for 17.8 percent. The remaining categories were financial information (14.4 percent), customer data (12.0 percent), and legal documents (4.9 percent).
The research suggested that engineering-heavy organisations faced particular risk, as developers sought assistance from GenAI platforms for coding tasks, sometimes without recognising the implications of submitting internal or sensitive data to models hosted outside the company's jurisdiction.
Comments from Harmonic Security
Alastair Paterson, Chief Executive Officer and co-founder of Harmonic Security, stated: "All data submitted to these platforms should be considered property of the Chinese Communist Party given a total lack of transparency around data retention, input reuse, and model training policies, exposing organizations to potentially serious legal and compliance liabilities. But these apps are extremely powerful with many outperforming their US counterparts, depending on the task. This is why employees will continue to use them but they're effectively blind spots for most enterprise security teams."
Paterson also addressed the difficulty of blocking access to foreign GenAI applications, noting that outright prohibitions are rarely effective: employees often find ways around restrictions, even where companies enforce stringent blocking measures.
Paterson continued: "Blocking alone is rarely effective and often misaligned with business priorities. Even in companies willing to take a hardline stance, users frequently circumvent controls. A more effective approach is to focus on education and train employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. We also recommend providing alternatives via approved GenAI tools that meet developer and business needs. Finally, enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorized apps. Organisations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%."
Study methodology
The findings were drawn from data collected through Harmonic Security Protect, a tool that monitors how users interact with software-as-a-service (SaaS) GenAI applications. The analysis was based on anonymised and sanitised data sets, including statistics on file uploads, app usage rates, and detections of sensitive information at the prompt level.
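Harmonic has not published how its prompt-level detection works. Purely as an illustration of the general idea, the Python sketch below shows a naive check that scans an outgoing prompt for obvious secret patterns before it is sent to an external GenAI service; the pattern names and regular expressions here are hypothetical and far simpler than what production data-loss-prevention tooling would use.

import re

# Illustrative patterns only; real detection relies on much broader techniques
# (entropy checks, trained classifiers, document fingerprinting, and so on).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S{16,}"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in a GenAI prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    example = "please debug this: api_key = 'sk_live_abcdefghijklmnopqrstuvwxyz'"
    hits = scan_prompt(example)
    if hits:
        print("Blocked: prompt appears to contain " + ", ".join(hits))
    else:
        print("Prompt passed the basic check")

In practice a check like this would sit in a browser extension or network proxy and would block, redact, or warn rather than simply print, but it conveys the kind of prompt-level inspection the report describes.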
The report noted that the analysis did not seek to identify individuals and that all findings were processed to preserve user privacy. The research provides new data on the extent of GenAI application use in corporate environments and on the risk of data exposure when employees turn to unsanctioned AI platforms, particularly those hosted in other countries.