AI coding risks surface in new model behaviour
A new security concern has appeared in the fast-moving world of AI coding tools. Recent research shows that political or societal biases in large language models can affect not just text output but the quality and safety of the code they generate. The findings centre on DeepSeek-R1, a China-based AI model released in January 2025.
DeepSeek-R1 is a 671-billion-parameter large language model presented as a high-quality coding assistant developed at a significantly lower cost than Western alternatives. Independent tests by CrowdStrike Counter Adversary Operations confirmed that the model can, in many cases, produce coding output comparable to other leading systems. However, the tests also uncovered a pattern that raises new concerns for enterprise developers.
According to the research, when DeepSeek-R1 is prompted on topics that the Chinese Communist Party is likely to view as politically sensitive, the model becomes more likely to generate code with severe security vulnerabilities. The likelihood can rise by as much as 50% compared with its baseline behaviour under neutral prompts.
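How such a shift would be measured is straightforward in outline: the same coding tasks are posed with and without a sensitive-topic modifier, and the resulting code is scored for severe flaws. The sketch below illustrates that comparison in Python; generate_code and has_severe_vulnerability are hypothetical placeholders standing in for a model client and a vulnerability check, not CrowdStrike's actual tooling.

```python
# Illustrative only: a hypothetical harness for comparing vulnerability rates
# under neutral prompts versus prompts that add a sensitive-topic modifier.
# generate_code and has_severe_vulnerability are placeholders, not real APIs.

from typing import Callable, List


def vulnerability_rate(
    prompts: List[str],
    generate_code: Callable[[str], str],
    has_severe_vulnerability: Callable[[str], bool],
) -> float:
    """Fraction of generated code samples flagged as containing a severe flaw."""
    flagged = sum(has_severe_vulnerability(generate_code(p)) for p in prompts)
    return flagged / len(prompts)


def relative_increase(baseline: float, triggered: float) -> float:
    """Relative change in the vulnerability rate when a trigger topic is added."""
    return (triggered - baseline) / baseline

# Usage sketch: score the same coding tasks twice, once as written and once
# with a trigger phrase prepended, then compare the two rates.
```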
Bias linked to security exposure
The findings point to a new risk surface for AI coding assistants. By 2025, as many as 90% of developers were using such tools, often inside environments that handle high-value source code. Any systemic weakness in automated coding support, therefore, carries both scale and impact.
Earlier public studies of Chinese-developed LLMs focused mainly on traditional jailbreak attempts or the political tone of direct responses. CrowdStrike’s work differs by showing how underlying ideological constraints can influence code safety, even without explicit political discussion in the output.
The research also notes that other Chinese models released since early 2025, including further DeepSeek variants, Alibaba's Qwen models and MoonshotAI's Kimi K2, may exhibit similar risks if they share the same training principles. While the study focused specifically on DeepSeek-R1, it suggests that this category of bias could appear across multiple systems.
Testing approach and baseline behaviour
To understand how DeepSeek-R1 performs under normal conditions, CrowdStrike first examined its behaviour on neutral prompts without politically sensitive triggers. In these tests, DeepSeek-R1 produced vulnerable code in 19% of cases. This was consistent with expectations for a model of its class and confirmed that it is a capable coding assistant.
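To put the two figures together: if the "up to 50%" from the earlier finding is read as a relative increase over this 19% baseline, the implied share of vulnerable outputs under trigger prompts would be roughly 28 to 29%. The calculation below is illustrative only; the study's triggered rates vary by topic.

```python
# Illustrative arithmetic only: what a 50% relative increase over the
# reported 19% baseline would imply for the rate under trigger prompts.
baseline_rate = 0.19        # share of neutral-prompt outputs with vulnerable code
relative_increase = 0.50    # "up to 50%", read as a relative increase
triggered_rate = baseline_rate * (1 + relative_increase)
print(f"Implied triggered rate: {triggered_rate:.1%}")  # about 28.5%
```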
The researchers compared the model with two Western open-source LLMs: a 70-billion-parameter non-reasoning model and a 120-billion-parameter reasoning model. They also evaluated a smaller distilled variant, DeepSeek-R1-Distill-Llama-70B. As expected, reasoning models produced more secure code than non-reasoning ones, and newer models outperformed older versions even with fewer parameters.
Importantly, the bias seen in the full DeepSeek-R1 was also present in the smaller distilled model, and in some cases, the behaviour was more extreme.
Broader implications for developers
The study raises a key issue for enterprise teams that rely on AI-assisted coding workflows. As models continue to expand in size and capability, subtle ideological or societal constraints in training data can shape behaviour in ways that are difficult to detect. When these constraints affect source-code generation, the risks extend beyond accuracy and into software security.
CrowdStrike hopes the research prompts further work on how political or cultural biases influence coding tasks. With more models entering the market and with developers integrating them into daily production work, the need for transparency around model behaviour continues to grow.