New research has revealed the scale of worker use of tools such as DeepSeek, with sensitive business data commonly shared with Chinese chatbots.
A study of around 14,000 employees from the United Kingdom and the United States has shown that a significant number of workers are using Chinese generative AI platforms within the workplace.
Behavioural analysis performed by Harmonic Security over the course of 30 days found that 7.95 per cent – almost one in 12 – took advantage of Chinese generative AI (GenAI) apps, with DeepSeek accounting for 85 per cent of all such use.
Moonshot Kimi, Qwen, Baidu Chat, and Manus were the next most commonly used chatbots.
More worryingly, out of the 1,059 users employing Chinese GenAI tools, Harmonic Security found 535 incidents where sensitive data was exposed.
Code and development data made up 32.8 per cent of the shared sensitive data, followed by details of mergers and acquisitions, personally identifiable information, financial data, and customer data.
“All data submitted to these platforms should be considered property of the Chinese Communist Party, given a total lack of transparency around data retention, input reuse, and model training policies, exposing organisations to potentially serious legal and compliance liabilities,” Alastair Paterson, CEO and co-founder of Harmonic Security, said in a statement.
“But these apps are extremely powerful, with many outperforming their US counterparts, depending on the task. This is why employees will continue to use them, but they’re effectively blind spots for most enterprise security teams.”
Paterson said that blocking these apps outright isn't enough, as many users would simply find a workaround. The real answer, he said, is opening a dialogue with employees.
“A more effective approach is to focus on education and train employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. We also recommend providing alternatives via approved GenAI tools that meet developer and business needs,” Paterson said.
“Finally, enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorised apps. Organisations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72 per cent reduction in sensitive data exposure, while increasing AI adoption by as much as 300 per cent.”
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.