Chinese regulators have reportedly told the country’s tech giants not to offer access to AI chatbot ChatGPT over fears the tool will give “uncensored replies” to politically sensitive questions.
That’s according to a report from Nikkei Asia citing “people with direct knowledge of the matter.” Nikkei says Chinese regulators told tech firms Tencent and Ant Group (an affiliate of e-commerce giant Alibaba) not only to restrict access to the US-developed ChatGPT but also to report to officials before launching their own rival chatbots.
Such a move would fit the Chinese government’s heavy-handed approach to censorship and its quick regulatory responses to new tech. Last month, for example, the country introduced new rules governing the production of “synthetic content” like deepfakes. These rules aim to limit harm to citizens from use cases like impersonation but also rein in potential threats to China’s tightly controlled media environment. Chinese tech giants have already had to censor other AI applications like image generators. One such tool launched by Baidu is unable to generate images of Tiananmen Square, for example.
China’s tech community is worried that censorship is slowing AI development
Although ChatGPT is not officially available in China, it’s caused a stir among the country’s web users and AI community, members of which have expressed dismay that such technology was not developed first in China. Some have cited the country’s strict tech regulation and zealous censorship as barriers to the creation of these systems. The United States’ success in creating new chatbots relies in part on an abundance of training data scraped from the web and on the rapid launch and iteration of new models.
Nikkei reports that Chinese users have been able to access ChatGPT via VPN services or third-party integrations into messaging apps like WeChat, though WeChat’s developer, Tencent, has reportedly already banned several of these services.
In social media posts shared earlier this week, China’s biggest English-language newspaper, China Daily, warned that ChatGPT could be used to spread Western propaganda.
“ChatGPT has gone viral in China, but there is growing concern that the artificial intelligence could provide a helping hand to the US government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests,” said China Daily reporter Meng Zhe.
In a longer YouTube video from the outlet, another reporter, Xu-Pan Yiyu, asks ChatGPT about Xinjiang. The bot responds by citing “reports of human rights abuses against Uighur Muslims including mass internment in ‘re-education’ camps, forced labor, and other forms of persecution by the Chinese government” — a response that Xu-Pan describes as “perfectly in line with US talking points.”
Sources in the tech industry told Nikkei that the clampdown by China’s regulators did not come as a surprise. “Our understanding from the beginning is that ChatGPT can never enter China due to issues with censorship, and China will need its own versions of ChatGPT,” one tech executive told the publication.
Since ChatGPT was launched on the web in November last year, Chinese tech giants including Tencent, Baidu, and Alibaba have announced they’re working on their own rival services. Just today, search giant Baidu said its AI chat service “ERNIE Bot” would soon be integrated into its search services. It’s not clear, though, if such a fast development schedule will continue after regulators have weighed in on the bots’ potential for harm.
Whatever happens next, Chinese tech giants will find it tricky to navigate such limitations. Restricting the training data for chatbots will hobble their abilities in comparison to Western rivals, and even if their input is tightly controlled, users may still be able to solicit unwanted responses for which the companies will likely be held accountable.
Controlling the output of these systems is also a challenge for US tech companies. ChatGPT’s creator OpenAI has been criticized by right-wing US commentators for the chatbot’s supposed liberal biases, while some groups, like Christian nationalists, are attempting to create their own systems. Any new chatbots created in China will only add to a growing throng of AI services tuned to fit a variety of political and cultural beliefs.