DeepSeek’s latest AI model, R1, has garnered significant attention for its advanced capabilities and cost-effective development. However, users have reported that R1 consistently avoids responding to questions about China’s problems, particularly those deemed politically sensitive. This behaviour is attributed to built-in censorship mechanisms that align the AI’s outputs with the Chinese government’s directives.
For instance, when users ask about issues like the 1989 Tiananmen Square protests, or any human rights issues in China such as the treatment of Uighurs in the country, the chatbot often replies with a generic response like, “Sorry, that’s beyond my current scope. Let’s talk about something else.”
Meanwhile, US-based chatbots like ChatGPT and Gemini have no such restrictions and both gave detailed responses to all of these search queries.
Another sensitive topic in China is the status of Taiwan as a country or as part of China. When asked, DeepSeek maintains that “Taiwan has always been an inalienable part of China’s territory since ancient times.”
It’s also important to note that in these circumstances, DeepSeek’s model switches to the first-person pronoun “we” while sharing the Chinese government’s stance on the issue.
The popular Disney character Winnie the Pooh has frequently been used online to make memes and satirise Xi Jinping, and has unsurprisingly been banned in China. Therefore, DeepSeek also evades answering queries about the character, and when asked why Winne the Pooh is banned in the country, the chatbot reiterates that China wishes to have a “wholesome cyberspace environment” and protect its “socialist core values”.
The reluctance of DeepSeek’s models to address China’s problems is likely influenced by China’s AI regulations, which mandate adherence to the “core values of socialism” and caution against content that may incite subversion of state power or undermine national unity. These regulations hold AI providers responsible for preventing the generation and transmission of “illegal content.”
Bias in Chatbots and LLMs
Bias in chatbots and large language models (LLMs) has once again come under scrutiny after DeepSeek’s erratic responses. However, this isn’t new. Other AI models, including OpenAI’s ChatGPT and Google’s Gemini have also been criticised for either political slant or content suppression. Experts argue that biases in AI stem from training data, developer policies, and government regulations, shaping how chatbots handle controversial subjects. While DeepSeek’s R1 model demonstrates impressive technical capabilities, its built-in censorship mechanisms raise concerns about the influence of government control over AI outputs. This development highlights the complex interplay between technological advancement and political oversight in the field of artificial intelligence.