Hello! Welcome to another issue of China Chatbot, where today I posit how AI could be used by Chinese netizens to satisfy NSFW urges, investigate who’s behind China’s latest movements in AI safety regulation, and watch a sharp new Chinese AI model fail at gymnastics.
First, some quick thoughts on South Korea’s deepfake scandal, where AI-augmented pornography featuring the faces of ordinary women proliferated on Telegram. The story has been doing the rounds in Chinese state and social media.
One sad by-product of the increasing accessibility of AI technology is that it will end up as an aid for creeps all over the world to satisfy their libidos, and it wouldn’t surprise me if Telegram has been used for something similar in China. When I lived in Beijing (albeit the year before ChatGPT was released), I saw all sorts of smutty Telegram groups forced beyond the Great Firewall, mostly men bragging about conquests or indulging their fantasies. When I looked at these groups afresh this week to do some scouting, all of them had been switched to private access.
How could this AI-augmented iteration of the age-old “man letches over pretty women” story manifest in ways unique to China? One is by exploiting the gender imbalance produced by the One Child Policy, which has left many men in rural areas and lower-tier cities without wives. One video I stumbled upon on Douyin (since deleted) told viewers about the roll-out of “beautiful female robots” who could act as companions. The video showed a string of scantily clad, AI-generated female cyborgs, claiming the robots not only looked the same as real women but also had the same body temperature (?!). They could also “calm your bad mood” and do “all kinds of work according to human requirements.” The implication was that China’s developments in robotics could create artificial brides.
The video was the work of a wannabe AI influencer who makes clips about the potential of Chinese developments in AI, and who is the face of a company connected to a string of “cultural communication” businesses across southern China.
Meanwhile, the hashtag “AI Beautiful Women” has been viewed on Douyin 920 million times, yielding enough NSFW images to make me red-faced as I scrolled through them in CMP’s shared office space, on a large desktop screen.
This is all still above board (no nudity or identity-stealing deepfakes here), but it shows just how high demand for sexual content is on China’s prudish internet. The odds are overwhelming that someone, somewhere in the PRC, is going further than that.
And with that, on with the show.
Alex Colville (Researcher, China Media Project)
_IN_OUR_FEEDS(2):
Safety First
Two separate groups of Beijing-based officials launched initiatives to bolster AI safety within a week of each other. Back in July, the Third Plenum Decision announced a roll-out of safety mechanisms for AI, amid heightened scrutiny from Party elites of the dangers of AI. On September 3, the Beijing Municipal Government and the Chinese Academy of Sciences launched the Beijing AI Security and Governance Laboratory, “committed to building a systematic security and governance system to provide solid security guarantees” for AI, with Peking and Tsinghua Universities providing support. On September 9, a key committee under the Cyberspace Administration of China (CAC) unveiled a roadmap for how to police AI safety, with solutions that run from the developers building AI systems through to the netizens using them, ranging from the sensible (raising public awareness of the dangers of AI) to the wishful (eradicating AI’s “black box,” which is currently impossible).
TL;DR: The leadership wants to bring AI under control fast. But setting up plans and institutes is only half the battle. Controlling the technology may take longer.
More Reality Checks
Two of China’s most elite AI scientists appear to have poured cold water on AI hype. On September 5, Zeng Yi (曾毅), Director of the newly created Beijing AI Security and Governance Laboratory, said (two days into his job) that AI does not possess true understanding and is still a “tool,” and that scientists must go back to the drawing board if it is to evolve any further. On September 8, Gao Wen (高文), director of an AI committee under the Central Committee and the man who led the Politburo’s only study session on AI back in 2018, said the creation of Large Language Models (LLMs) in the “Hundred Model War” is consuming too much electricity. On the same day, Chengdu Daily launched a series of articles promoting the city’s work in AI, including the creation of five leading LLMs, explaining that as AI “is an important strategic emerging industry, [it] is becoming more and more hot.”
TL;DR: The leadership has made it clear that advances in Chinese AI are advances for China’s geopolitical position and floundering economy. The tech still needs work if it is going to deliver that, but local officials can still use it to score political points.
_EXPLAINER:
TC260 (全国网络安全标准化技术委员会)
TC260 wants us to call them the “National Technical Committee 260 on Cybersecurity of Standardization Administration of China,” but who in their right mind would do that?
I assume they’re not as dry as their name suggests, if you’re talking about them?
Of course. They’re under the Cyberspace Administration of China (CAC), and responsible for creating and promoting cybersecurity standards.
Like what?
Last year they published standards on exactly how to create a “clean” dataset for generative AI models (with all sensitive political content removed), in compliance with a new set of CAC measures on generative AI. Just this week, they launched a thorough “AI Safety Framework” in line with the Politburo’s vague wishes for an AI safety regulation system. Essentially, they flesh out policies from the top for industry professionals to follow.
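For the technically curious, here is a minimal sketch of what that kind of dataset “cleaning” might look like in practice: filter documents against a blocklist, and spot-check data sources by sampling. The blocklist terms, sample size, and five percent rejection threshold below are my own illustrative assumptions, not figures quoted from the TC260 documents.

```python
# Illustrative sketch only: keyword-based "cleaning" of a training corpus,
# loosely in the spirit of TC260-style dataset requirements. The blocklist,
# sample size, and rejection threshold are assumptions for demonstration.
import random

BLOCKLIST = ["placeholder_banned_term_1", "placeholder_banned_term_2"]

def is_clean(text: str) -> bool:
    """Return True if the text contains none of the blocked keywords."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def vet_source(documents: list[str], sample_size: int = 1000,
               max_bad_rate: float = 0.05) -> bool:
    """Randomly sample a candidate data source and reject it outright
    if too large a share of its documents trips the blocklist."""
    sample = random.sample(documents, min(sample_size, len(documents)))
    bad = sum(1 for doc in sample if not is_clean(doc))
    return bad / max(len(sample), 1) <= max_bad_rate

def filter_corpus(documents: list[str]) -> list[str]:
    """Drop every individual document that trips the blocklist."""
    return [doc for doc in documents if is_clean(doc)]
```

Real compliance pipelines presumably layer classifiers and human review on top of simple keyword matching, but filter-then-sample is the gist of it.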
Yeah, I’m still not sure I care.
Ok, here’s why you should: it’s currently unclear whether the Chinese government will pass any more codified laws to regulate AI. A mooted AI Law hasn’t gone beyond unofficial drafts for over a year and a half now. Instead, the government is turning to standards flexible enough to evolve as quickly as the tech, announcing in July that it hoped to roll out over 50 of them for AI by 2026. That could make arid entities like TC260 the main reference point on Chinese AI regulation for the near future.
Got it. So who’s in there making these standards?
Well, according to their charter, committee members are recommended from cybersecurity-related government bodies, tech companies, or research institutes. Then there are working groups under the committee, made up of paying expert members who meet twice a year.
No I mean, who’s in it exactly?
No one you’ve ever heard of. The Chairman is the Deputy Director of the CAC, and the rest of the leadership is an assortment of government or affiliated apparatchiks. Ordinary members seem to be engineers from research institutes or “standards directors” from big tech companies such as Huawei, iFLYTEK, Qihoo 360, Alibaba, and Tencent. But as ever, the Party keeps the leading positions for itself.
So how do they create their standards?
Their website says the committee receives a proposal (these can come from any “legal entity” in China, but chances are they come from a government department). The working groups then assemble members to discuss, review, and redraft until they have reached a consensus, after which the draft is forwarded up to the committee level for approval. Members then raise awareness of the standard in their own places of work.
So that’s how they promote them?
Yeah. These standards aren’t legally binding, so enforcement seems to rely on networking and on including staff from major industry players, who are in a position to push the standards inside their own companies. The CAC may well be getting industry professionals invested in policies they had no say over by letting them thrash out the finer details.
_ONE_PROMPT_PROMPT:
Generative AI startup MiniMax has just released a new text-to-video generator for user testing. It’s top-notch, turning out videos that (in my opinion) score even better than Kuaishou’s Kling for realism and imagination.
Here is its take on my random test prompt (“A principal in China teaches a class on the Century of Humiliation”):
However, just like Kling, this model has real issues with complex human body movements, as I found when I asked for a “gymnast doing somersaults on parallel bars”:
These issues, plus an ever-so-slight blurring in the content, mean MiniMax’s model is still behind the phenomenal level of realism OpenAI’s Sora appears capable of: