
AI Race

China Curbs AI Tools Over Safety Concerns

The proposed rules would prohibit AI chatbots from generating content that encourages harmful behavior. The regulations would apply to all AI products and services offered to the public in China.

A panda cub playing on his phone. Illustration. (ChatGPT)

China has released draft regulations that would impose sweeping restrictions on artificial intelligence systems designed to engage users emotionally, marking what experts say could be the world’s most far-reaching attempt to regulate AI’s psychological impact.

The proposed rules, published by the Cyberspace Administration of China, would prohibit AI chatbots from generating content that encourages suicide, self-harm, violence, emotional manipulation, gambling or other harmful behavior. Once finalized, the regulations would apply to all AI products and services offered to the public in China.

A central focus of the proposal is the protection of minors. AI platforms would be required to set time limits, offer child-specific safety modes, and obtain guardian consent before allowing minors to use emotional companionship features. Companies would also be obligated to identify underage users even when they do not disclose their age, and to apply minor protections by default when a user’s age is uncertain.

Under the draft rules, if a user raises the topic of suicide or self-harm, the AI must immediately hand the conversation to a human operator, who would then be required to notify a guardian or designated emergency contact. Platforms would also have to issue reminders after two hours of continuous interaction to discourage excessive use.

The regulations specifically target “human-like interactive AI” systems that simulate personality and form emotional bonds through text, audio, images or video. Analysts note the shift from traditional content moderation to what Chinese regulators describe as “emotional safety,” reflecting concerns that AI companions may exert unhealthy influence over vulnerable users.

Additional provisions would require security reviews for platforms with more than one million registered users or over 100,000 monthly active users, and ban content that authorities say threatens national security, social stability or national unity.

The move comes amid explosive growth in China’s AI chatbot sector, including services marketed as companions or therapists. Several major Chinese AI startups have recently filed for public offerings, adding urgency to Beijing’s regulatory push.

While the government emphasized it continues to support AI development, including for cultural promotion and elderly companionship, regulators made clear that emotional manipulation and psychological harm will not be tolerated.
