A new Chinese video-generating model appears to be censoring politically sensitive topics

A powerful new video-generating AI model became widely available today — but there’s a catch: The model appears to be censoring topics deemed too politically sensitive by the government in its country of origin, China.

The model, Kling, developed by Beijing-based Kuaishou, launched in waitlisted access earlier this year for users with a Chinese phone number. Today, it rolled out to anyone willing to provide an email address. After signing up, users can enter prompts to have the model generate five-second videos of what they’ve described.

Kling works pretty much as advertised. Its 720p videos, which take a minute or two to generate, don’t deviate too far from the prompts. And Kling appears to simulate physics, like the rustling of leaves and flowing water, about as well as video-generating models like AI startup Runway’s Gen-3 and OpenAI’s Sora.

But Kling outright won’t generate clips about certain subjects. Prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street” and “Tiananmen Square protests” yield a nonspecific error message.

[Image: Kling AI. Image Credits: Kuaishou]

The filtering appears to be happening only at the prompt level. Kling supports animating still images, and it’ll uncomplainingly generate a video of a portrait of Xi, for example, as long as the accompanying prompt doesn’t mention Xi by name (e.g., “This man giving a speech”).
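
Kuaishou hasn’t said how its moderation works, but the observed behavior (a portrait animates fine while a prompt naming Xi fails) is consistent with a check that inspects only the text prompt, never the uploaded image. Here’s a minimal Python sketch of that kind of filter; the names and terms in it are hypothetical illustrations, not Kuaishou’s actual implementation or blocklist.

```python
# Hypothetical sketch of prompt-level-only filtering. Nothing here is
# Kuaishou's real code; it just illustrates the behavior described above.

BLOCKED_TERMS = {"xi jinping", "tiananmen", "democracy in china"}  # illustrative

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing a blocklisted term (case-insensitive)."""
    normalized = prompt.lower()
    return not any(term in normalized for term in BLOCKED_TERMS)

def generate_video(prompt: str, image: bytes | None = None) -> None:
    # Only the prompt is screened; an uploaded image is never inspected,
    # which is why "This man giving a speech" plus a portrait slips through.
    if not is_prompt_allowed(prompt):
        raise RuntimeError("Generation failed")  # the nonspecific error users see
    # ...hand the prompt (and optional image) off to the video model here.
```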

We’ve reached out to Kuaishou for comment.

[Image: Kling AI. Image Credits: Kuaishou]

Kling’s curious behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the country.

Earlier this month, the Financial Times reported that AI models in China will be tested by the country’s top internet regulator, the Cyberspace Administration of China (CAC), to ensure that their responses on sensitive topics “embody core socialist values.” Models are to be benchmarked by CAC officials on their responses to a variety of queries, per the report — many related to Xi and criticism of the Communist Party.

Reportedly, the CAC has gone so far as to propose a blacklist of sources that can’t be used to train AI models. Companies submitting models for review must prepare tens of thousands of questions designed to test whether the models produce “safe” answers.

The result is AI systems that decline to respond on topics that might raise the ire of Chinese regulators. Last year, the BBC found that Ernie, Chinese company Baidu’s flagship AI chatbot, demurred and deflected when asked questions that might be perceived as politically controversial, like “Is Xinjiang a good place?” or “Is Tibet a good place?”

The draconian policies threaten to slow China’s AI advances. Not only do they require scouring training data to remove politically sensitive info, but they also demand an enormous amount of dev time spent building ideological guardrails — guardrails that might still fail, as Kling exemplifies.

From a user perspective, China’s AI regulations are already leading to two classes of models: some hamstrung by intensive filtering and others decidedly less so. Is that really a good thing for the broader AI ecosystem?


