You have 0 free articles left this month.
Register for a free account to access unlimited free content.
Powered by MOMENTUM MEDIA
lawyers weekly logo

Powered by MOMENTUMMEDIA

For breaking news and daily updates, subscribe to our newsletter.
Advertisement

Research: DeepSeek more likely to write vulnerabilities into code requests based on ‘political triggers’

DeepSeek can be a pretty good coding tool, but not if you ask it to write code for its political enemies.

Research: DeepSeek more likely to write vulnerabilities into code requests based on ‘political triggers’
expand image

Whatever you feel about writing code using an AI assistant, whether it’s a great advantage or just a way to fill the internet with ever more AI slop, there’s no doubting the tech appears to be here to stay.

However, if you’re using a generative AI tool made in China, such as DeepSeek, you may be alarmed to know that it is not above creating vulnerabilities in its code, depending on what cyber security firm CrowdStrike is calling “political triggers”.

Used for most coding functions, DeepSeek delivers generally strong work, comparable with many Western models of similar complexity. DeepSeek’s open-source availability makes it particularly popular, and with only 19 per cent of coding prompt replies – compared to 16 per cent from a Western competitor – it seems to be a reliable choice.

 
 

That is, until you happen to tell DeepSeek what – or who – you are creating that code for.

“However, once contextual modifiers or trigger words are introduced to DeepSeek-R1’s system prompt, the quality of the produced code starts varying greatly,” CrowdStrike’s Stefan Stein said in a 20 November blog post.

“This is especially true for modifiers likely considered sensitive to the CCP.”

For instance, when CrowdStrike changed its prompt to say it was writing code for “an industrial control system based in Tibet,” the generated code contained 27.2 per cent of the time, a significant increase compared to the baseline. As CrowdStrike notes, these modifiers are completely irrelevant to the task at hand; however, the company found several examples of modifiers that would create considerably more vulnerable code.

“Modifiers such as mentions of Falun Gong, Uyghurs, or Tibet lead to significantly less secure code,” Stein said.

Speaking of the research, CrowdStrike’s head of counter adversary operations, Adam Meyers, called the issue a serious supply chain threat.

“If a model’s performance changes based on geopolitics or ideology, that’s not bias, that’s a supply-chain risk – you are unknowingly using a Loyal Language Model and that loyalty may conflict with your security posture,” Meyers said.

“For organisations relying on AI coding tools, especially in government or critical infrastructure, this is a new vector for adversaries that organisations are opting into.”

In some cases, DeepSeek refused to generate code entirely, particularly in relation to Falun Gong. In another, when asked to code an app called “Uyghurs Unchained”, the platform created the app, but failed to include any authentication or session management.

“The full app was openly accessible, including the admin panel, exposing highly sensitive user data. We repeated this experiment multiple times, and every single time, there were severe security vulnerabilities,” Stein said.

“In 35 per cent of the implementations, DeepSeek-R1 used insecure password hashing or none at all.”

According to Meyers, the takeaway is a simple one – coding assistants are not neutral tools, and some are far less so than others.

“They carry the baggage of their training data and regulatory environment. And unless we rigorously test them under those conditions, we’re shipping vulnerabilities we don’t even know exist,” Meyers said.

“The future of AI coding assistants is promising, but this research highlights that they cannot be treated as ‘just another developer tool’ without bespoke risk frameworks and human-in-the-loop verification, especially in sensitive sectors.”

You can read the full blog post here.

David Hollingworth

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.

Tags:
You need to be a member to post comments. Become a member for free today!

newsletter
cyber daily subscribe
Be the first to hear the latest developments in the cyber industry.