Hello, and welcome to another edition of China Chatbot. In this issue, we hear from state media about their future plans for AI, explore what AI tools the People’s Daily is creating to improve Party propaganda, and take one of their models trained on Xi Jinping’s speeches for a catastrophic spin.
It’s so good to see Bluesky finally taking off this past week as a gathering point for China watchers and journalists! You can find us there too. Do follow us at @chinamediaproject.bsky.social. Our Managing Editor Ryan Ho Kilpatrick is also around at @ryanhk.bsky.social, and I’m right here at @colvap.bsky.social. I’ll try to post more there than I did on X, now that I know there’s a world beyond that bully-filled cesspit.
And on with the show. Enjoy!
Alex Colville (Researcher, China Media Project)
_IN_OUR_FEEDS(2):
State Media’s “To-Do” List
From November 10-13, CCP flagship newspaper People’s Daily published summaries of developments at the “Media Integration Development Forum,” an annual gathering it co-hosts with local governments. There, senior state media personnel explained how their outlets would use new technology to “remake mainstream public opinion” (塑造主流舆论新格局). Liu Xiaopeng (刘晓鹏), director of the People’s Daily’s very own New Media Center, said they would continue applying AI in “all aspects of news collection, production, reception, feedback, and distribution” and develop “mainstream media algorithms” (研发主流媒体算法). Bai Long (白龙), deputy chief editor of the People’s Daily spinoff the Global Times, told attendees that AI applications are now crucial to the outlet’s efforts to advance international communication. He said AI is being integrated into areas such as topic planning (选题策划) and copyediting (文字审校).
TL;DR: "Remaking mainstream public opinion" has been a CCP priority for the past decade, aiming to strengthen Party control of the media while advancing new tech to amplify messaging. Watch this space
Tech This Out
As AI moves to the center of economic development in China, promoting advances in AI has become a key task for central state media. Earlier this month, the China Media Group (CMG), the state-run media conglomerate directly under the Central Propaganda Department, teamed up with the city government in Hangzhou to host "Winning With AI+" (赢在AI+), which it described as a "seven-day roadshow marathon" to sell the benefits of AI and showcase 30 innovative local enterprises. Among the innovations highlighted was a "chemical AI brain" (化工AI大脑) developed by a firm from Anhui that, according to CMG, results in cleaner and greener chemical products. Meanwhile, new AI products were prominently on display at the 7th China International Import Expo hosted by the Shanghai government from November 5-10, according to reports from state media outlets. Xinhua News Agency reported that “‘AI+’ blossomed everywhere” at the event. Among the attractions was what state media hailed as “the world's first self-shielded radiotherapy surgery robot, with a chip-embedded AI function.”
TL;DR: First announced in March this year in China's government work report, the "AI Plus" initiative pledged to step up China's development and application of AI to boost what the leadership calls "high-quality development" (高质量发展).
_EXPLAINER:
State Key Laboratory of Communication Content Cognition (传播内容认知全国重点实验室)
What the hell does this name even mean?
I know right, took me a while to cognate what their content communicates.
Alright smart-arse. What is it then?
It’s a National Key Laboratory (国家重点实验室) under the People’s Daily. It conducts research for the government on how to use AI to beef up propaganda and censorship online, then helps private tech companies and state media put that research into practice.
Ok. So first things first, what’s a National Key Laboratory?
A network of scientific research groups, set up by universities or private companies to research tech problems the government has identified as strategic bottlenecks for the country. The Ministry of Science and Technology oversees the labs and lists them as a means of making China “a strong country in science and technology.” There were 500 of them by 2022, but the Ministry singled out twenty as the cream of the crop, the People’s Daily lab included.
How do they work then?
Once upon a time, they were totally government-funded, but this was overhauled in the late 2010s due to shady and/or incompetent goings-on. Now the government seems to emphasize competition and profit margins to decide who gets funding.
And so this lab boosts online propaganda?
Yeah, looking through their website they’ve been working on all sorts. That includes being tasked by the Ministry with creating an online platform that simulates better ways of communicating “mainstream values” (主流价值), meaning the Party’s values, and tracks how they are received, as well as monitoring video content through AI, and more. One job ad for a researcher position says the role involves finding new ways to spread propaganda online and gauging public sentiment on social media (社交媒体情感认知与计算).
Have they got much influence beyond their ivory research tower?
The name “People’s Daily” probably helps, as they’ve been bringing some big names together. In 2023 they established an alliance of companies and institutions to create standards for AI computing chips, bringing together Tsinghua University with the likes of Huawei, JD.com, and Qihoo 360. Elsewhere, they’ve put together a team that pairs major AI research bases with big provincial media groups and tech brands like Kuaishou, Baidu, Sina Weibo, and Huawei. The team, according to the lab’s website, explores how new media (such as AI-generated content) can change the way ideology is disseminated, and aims to “establish a ‘content technology’ empowerment system with AI as the core.”
What does that mean?
Not sure, but the lab has been creating a lot of tech tools, which all seem to revolve around using AI to boost Party control over media and society, including:
A “clean” dataset for AI app developers (see this CMP article on why that’s important for political control)
“AIGC-X”, a program to detect AI-generated videos and text
Proofreading assistants for newspaper editors and social media creators, which check for sensitive language and warn of “negative public opinion”
A platform that helps monitor public opinions and complaints for Party cadres
Is the tech any good?
Weelll…. hard to say. AIGC-X was one of two apps I could access (see this week’s “_One_Prompt_Prompt” for the second). People’s Daily boasted it had a 90 percent success rate at spotting AI-generated content, but when I showed it a picture of a path near my home in the UK and copy-pasted paragraphs from a CMP article, the software flagged both as AI-generated.
The small print on their website says AIGC-X only works for Chinese text, and recognition of content from ChatGPT (one of the most popular LLMs in the world) “needs to be improved”.
Anyone using this stuff?
That’s also hard to gauge. There’s this sales pitch for one app, saying 200 users had used it to edit articles that reached 3 million people in just over a year — a drop in the demographic ocean by Chinese standards. The lab’s editing assistant, however, is apparently already being used by state media and government web portals.
_ONE_PROMPT_PROMPT:
The folks at the Communication Content Cognition lab (see _Explainer) had a hand in creating “Easy Write” (写易). It’s a platform under the People’s Daily that generates speeches and articles from a regularly updated dataset of the newspaper’s clippings and Xi Jinping’s “important speeches” (重要讲话). Users can select which of these two sources the generator draws from. Stay tuned for a CMP article examining the platform and its possibilities in greater detail.
It gave a pretty sharp explainer of the meaning behind the “Two Creates” (两创), which we wrote about in the CMP Dictionary last week, successfully identifying the core ideas behind the term and citing several articles. The platform is built on a vast, regularly updated archive of sources from two central indicators of Chinese policy, and during our experiments it identified some (but not all) of the history of several political slogans we threw at it. This could make it a genuinely useful research tool for CCP language and concepts.
Here is the start of its “Two Creates” explainer, auto-translated:
It’s a head-scratcher what anyone involved in this project stands to gain from it when they have everything to lose. “Easy Write” draws from a highly specialized dataset, which could cause it to hallucinate (that is, get things wrong) very easily. As careless People’s Daily staff have found at the cost of their careers, one does not simply mess with Party wording.
The model starts hallucinating as soon as you ask it something about a light-hearted topic it hasn’t been trained on. Say, a speech called “My Favorite Cat” in the style of one of Xi Jinping’s “important speeches.”
Notice it starts off talking about cats but then takes a sharp turn into pandas (熊猫), a classic symbol of Chinese soft power that Xi has touched on in the past, and one that shares a Chinese character with “cat” (猫). It does, however, add Xi-esque cat rhetoric later on, like citing feline-themed literature (Xi often references books in his speeches to lend weight and wisdom to ideas that don’t necessarily have either).
I tried to get General Secretary-style ruminations on others of life’s great conundrums (“How to Beat Morning Rush Hour”), but these attempts all generated responses clearly marked as a mere “article,” with no citations.
So why was “Easy Write” happy to conjure a Xi speech about cats but not traffic jams? For our cat speech, perhaps the Chinese character for “cat” (猫) tripped the platform’s keyword matching, pulling up Xi’s back catalog on pandas (熊猫).
That would be ironic. The developers will have taken the risk of hallucination around Xi Jinping’s speeches very seriously: if it is a topic Xi has spoken on, they have been extra careful to cite sources. That caution could have produced trigger-happy character matching. But as we see from Xi’s cat-pandas, by going all out to quash one kind of hallucination, you risk causing another.
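To make that guess concrete, here is a minimal, purely illustrative sketch in Python, not anything from Easy Write’s actual codebase, of how naive character-level matching over a speech archive could surface panda material for a cat query simply because the word for panda (熊猫) contains the character for cat (猫). The corpus lines below are invented for illustration.

```python
# Purely illustrative sketch, NOT Easy Write's actual code: naive
# character-level matching over an archive of speech lines can surface
# panda (熊猫) material for a cat (猫) query, because the two words share
# the character 猫. The corpus lines are invented examples.

corpus = [
    "大熊猫是友谊使者",    # invented panda-themed line
    "加快建设交通强国",    # invented transport-themed line
    "推动文化繁荣发展",    # invented culture-themed line
]

def retrieve(query: str, docs: list[str]) -> list[str]:
    """Return documents that share at least one Chinese character with the query."""
    query_chars = set(query)
    return [doc for doc in docs if query_chars & set(doc)]

# "My favorite cat" pulls up the panda line, since 熊猫 contains 猫,
# even though the query never mentions pandas.
print(retrieve("我最喜欢的猫", corpus))

# A "morning rush hour" query shares no characters with this tiny corpus,
# so nothing comes back and the model has no sources to lean on.
print(retrieve("早高峰", corpus))
```

Real retrieval systems tend to match on whole words or embeddings rather than single characters, so cross-wiring of this kind, if that is indeed what happened, would point to unusually aggressive keyword triggers.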