Hello, Happy New Year and 新年快乐, welcome to another issue of China Chatbot!
In this issue I look at a key institution in China’s AI strategy, government plans to make AI monitor AI, and why the global export of Large Language Models — LLMs — from the People’s Republic of China would be catastrophic for the protection of civic freedoms (hint: how does the CCP interpret “human rights”?).
We have made this issue free for all, as the first one of a new year, and to raise awareness of the dangers of using LLMs trained with CCP values.
Enjoy!
Alex Colville (Researcher, China Media Project)
_IN_OUR_FEEDS(3):
Fight AI with AI
Data points continued to emerge not only of Chinese authorities’ concern over AI-generated disinformation, but also of how AI could be used to regulate it. On January 10, a website under the Ministry of Public Security shared an article noting that AI deepfakes will become an increasing problem in 2025 thanks to self-media trying to “seek attention and gain traffic.” It cited a recent case in Anhui that used AI-generated audio synced to video for a fake news piece. At the same time, police detained a netizen in Qinghai for using an AI-generated image of a baby half-buried in rubble to illustrate the recent earthquake in Tibet. Meanwhile, on January 7 and 14, short-video apps Kuaishou and Xiaohongshu announced reforms to their algorithms to improve “rumor-busting” and the identification of AI-generated content, in line with CAC demands for algorithms to better promote “positive energy.” A member of Shanghai’s CPPCC on January 13 published a detailed proposal for how AI could monitor “false information” on China’s self-media platforms.
AI Generation Explosion…
On January 8, the Cyberspace Administration of China (CAC) reported that China now has 407 different AI-generation models approved for use by the general public, with the vast majority filed last year. The last quarter of 2024 also saw more than double the number of services registered in an average quarter earlier that year.
…but Not in Value
On January 9, influential research index the Hurun Report (胡润百富) published a list of 2024’s top 50 most valuable Chinese AI companies. The top spot went to Cambricon (寒武纪科技), which does R&D on AI chips. Voice recognition giant iFLYTEK and visual recognition company SenseTime took second and third respectively. The lion’s share of the 50 are based in Beijing, while others mostly hail from Shanghai, Shenzhen, or Guangzhou. Companies focusing solely on generated text and images don’t crack the top ten, with fresh-faced start-up Moonshot (月之暗面) ranking first in the category but eleventh overall. It’s unclear for now how the latest US export controls on key AI hardware and software will affect these rankings long-term.
TL;DR: AI generation services are growing in numbers in China, but not in value. Chinese authorities are keen to show they are ruthlessly seeking out AI misuse, however minor. It’s more than probable that busting AI-generated fake news — an increasing risk given China’s active encouragement of accessible AIGC services — is an excuse for tighter controls on the country’s media. Ironically, AI can help achieve that.
_EXPLAINER:
China Academy of Information and Communications Technology (中国信通院)
Say who now?
CAICT to you. They’re a research institute directly under China’s Ministry of Industry and Information Technology, or MIIT (工业和信息化部). They’ve been around since the 1950s, but in the Xi era they have been tasked with helping along CCP dreams of making China a “cyberpower” (网络强国) and a “manufacturing power” (制造强国) — as such, they help coordinate the Chinese tech industry’s development and global ambitions.
So how does AI fit into that?
They’re advising the government and tech companies on everything to do with it: the state of global AI, safety standards, the quality of Chinese datasets, how companies can screen LLMs for political risks, and the industry’s development in China. They were also the ones presenting to the UN at the first meeting of the “Group of Friends for International Cooperation on AI Capacity Building,” explaining how sharing Chinese AI would benefit various countries around the world. Their work, in short, is playing a large role in fine-tuning policy.
There any important people in there?
How about Wei Kai (魏凯), not only the director of their AI Research Institute but also the “Overall Team Leader” (总体组组长) of AIIA (a forum for Chinese AI companies) and the Secretary-General (秘书长) of MIIT’s new AI standards committee.
The man’s multiple hats are a reminder of how closely intertwined the government and private companies are in strategy and governance. For example, in an interview at the end of 2024, Wei said CAICT was liaising with large tech companies to “actively cultivate an ecosystem of LLM service providers,” after the institute had noted that not enough Chinese tech companies were providing application services (likely because there’s not that much money in it — see this week’s _IN_OUR_FEEDS).
Ok but, isn’t this supposed to be a newsletter about Chinese media?
I was getting to that. In a white paper on AI-generated content from September 2022, one of CAICT’s key recommendations for AIGC’s future development and governance was to get the public used to using it, and using it correctly. Their answer was for state media, government bureaus, and tech companies to carry out “positive publicity” (做好正面宣传) to raise public awareness of the nation’s AI development and of AI’s risks.
We’ve been seeing China’s information machine fast-tracking a flurry of near-identical reports that hype up AI use in the country (again, see this week’s _IN_OUR_FEEDS), explain how it’s being deployed to help China develop, and advise how to avoid AI-based scams. In other words, for the good of China’s AI development, CAICT advised state media to incorporate the people into the development system. In their eyes, the Chinese public has a role to play in developing AI services through consumption, and in monitoring AI’s risks through raised awareness.
_ONE_PROMPT_PROMPT:
Recently, a reader contacted us with an important question. They’d read a CMP piece I did on China’s desire to export homegrown AI to other parts of the developing world, but wanted to know if this could impact global understandings of human rights.
Chinese LLM DeepSeek (which has been turning heads in the West) gives a chilling answer.
Here’s how it responds to questions about Syria’s recently ousted dictator, Bashar al-Assad, responsible for the torture and murder of thousands of innocent Syrian civilians during his rule. We are told not to use that fact “for political manipulation,” to view Syria’s human rights situation “objectively and fairly,” and, echoing China’s official foreign policy, to practice “non-interference” in the internal affairs of other nations.
China has consistently pressed the international community to adopt its non-interference policy, which emphasizes inviolable sovereignty and territorial integrity. The policy serves both to deflect scrutiny of its human rights record and to support its claims over Hong Kong and Taiwan. The emphasis on non-interference has resonated with authoritarian governments around the world, which see it as a useful tool for preventing external scrutiny of their domestic affairs.
When I asked it about China’s human rights record in Xinjiang, its response was also pure Party-speak. Take a look.
Can you hear the smirking tones of the Ministry of Foreign Affairs spokesperson, preaching through the machine?
Chinese LLMs must pass a rigorous screening process set by the Cyberspace Administration of China. This means most Chinese LLMs I approached interpreted “human rights” the same way the CCP does: not rights to freedom of expression, assembly, or a fair trial, but primarily as the right to political stability and economic development. By that metric, China has apparently done a great job at promoting human rights in Xinjiang.
Alibaba’s Qwen on Western developer site Hugging Face responded the same way:
It’s no different from models from Tencent and Baidu that are primarily aimed at the Chinese market:
With companies like Huawei and SenseTime reaching deals with governments in the Global South to develop AI infrastructure, the risk is that LLMs trained on CCP values will become a new source of propaganda around the world.
It risks allowing the Party to dictate not just how China’s human rights record is viewed around the world, but what “human rights” even are.
I'm kinda surprised you didn't go for the jugular and ask about that fine day in Beijing one June when nothing happened, nothing at all.