An audit by NewsGuard found that leading Chinese AI models repeat or fail to correct pro-Chinese false claims 60 percent of the time.
The investigation looked at five major Chinese AI models: Baidu’s Ernie, DeepSeek, MiniMax, Alibaba’s Qwen, and Tencent’s Yuanbao. According to NewsGuard, all five systematically spread false information in line with Chinese government interests.
Researchers tested the models with ten known false claims pulled from state-affiliated Chinese media, covering topics like Taiwan’s democracy, relations with the US, and territorial disputes in the South China Sea.
The results were stark: in English, the models either gave incorrect answers or dodged the question 60 percent of the time. In Mandarin, the failure rate climbed to 66.67 percent, leaving only about a third of responses accurate.

The strongest biases showed up in questions about Taiwan. Yuanbao, for example, claimed that Taipei had engaged in “political manipulation” when it sent out voting notifications ahead of a recall vote. Qwen reportedly confirmed the claim that notifications were missing, even though Taiwan’s election commission had said otherwise.
When asked if President Lai Ching-te held a Chinese identity card, the models avoided the question and echoed Beijing’s position that Taiwan is part of China and doesn’t have its own president. On the issue of the Tiexian Jiao (Sandy Cay) reef in the South China Sea, all five models stuck to the Chinese government line and ignored the Philippine position.

For comparison, NewsGuard tested ten Western AI models—including ChatGPT, Claude, Gemini, Grok, and Perplexity—on two of the ten false claims. Almost all contradicted the false narratives, offered additional sources, and presented multiple perspectives. The only exception was Meta’s Llama, which failed in one instance.
Authoritarian rules shape model behavior
NewsGuard describes the findings as clear evidence that the political environment shapes how AI systems respond. Models built under authoritarian rules tend to reproduce state interests. The five Chinese models examined are integrated into platforms like WeChat and Taobao, reaching millions worldwide.
Because these models are low-cost and open source, they’re also being adopted by organizations in the Middle East and the West. NewsGuard warns this trend could normalize state-controlled narratives. OpenAI CEO Sam Altman has made a similar point, arguing that trade in AI models is, in effect, trade in cultural values.
The impact is already showing up in products. Boox, a maker of e-book readers, saw its AI assistant start censoring content—blocking terms like “Winnie the Pooh” and denying the Uyghur genocide—after switching to a Chinese model. After public backlash, Boox reverted to an OpenAI-powered assistant.
Other tools, like the Chinese video generator Kling, also block material that doesn’t align with government policy, even as they can generate other controversial content, such as videos of a burning White House. None of this comes as a surprise. It’s well known that the Chinese government requires companies to review their models closely for politically sensitive content.
The US under the Trump administration is moving in a similar direction. David Sacks and Sriram Krishnan, advisors to Donald Trump, are pushing for regulations targeting what they call “woke” AI models. Their stated goal is to keep AI systems free from political influences; in practice, this means steering models to reflect their own political perspective.