Wedoany.com Report, Dec. 29 - China's cyberspace regulator issued draft rules for public comment on Saturday, introducing stricter oversight of artificial intelligence services that simulate human personalities and engage users in emotional interactions.
The proposed regulations focus on consumer-facing AI products and services available to the public in China. These include applications that display simulated human personality traits, thinking patterns, and communication styles while interacting with users through emotional exchanges via text, images, audio, video, or other formats.
The draft emphasizes user safety and responsible development. Service providers would be required to display clear warnings about the risks of excessive use and to actively intervene when users show signs of dependency or addiction.
Under the proposal, companies must take full safety responsibility across the entire product lifecycle. This includes putting in place robust systems for algorithm review, data security, and the protection of personal information.
The rules pay special attention to potential psychological risks. Providers would need to monitor and assess users' emotional states and their level of reliance on the service. When extreme emotions or addictive behavior are detected, appropriate intervention measures must be taken to protect the user.
The draft also establishes clear content boundaries. AI services would be prohibited from generating material that endangers national security, spreads false information, promotes violence, or contains obscene content.
These proposed measures reflect ongoing efforts to balance innovation with safety as AI technologies become more integrated into daily life. By setting requirements for user protection, content moderation, and operational accountability, the regulator aims to guide the healthy development of emotional AI applications.
The public comment period allows stakeholders, industry participants, and citizens to provide feedback on the draft. Once finalized, the rules are expected to shape how such AI services are designed, deployed, and managed within China.
The move comes as AI companion technologies continue to gain popularity worldwide, offering users virtual interaction and emotional support. The draft rules seek to ensure these services operate in a secure and responsible manner while addressing potential risks to individual well-being.
Industry observers note that the emphasis on lifecycle safety, real-time intervention, and strong data protection aligns with broader efforts to build trustworthy AI ecosystems. The proposed framework is intended to foster innovation within clear safety boundaries, supporting both technological advancement and user protection.
Feedback on the draft will help refine the final version of the regulations, which are likely to influence the future direction of emotional AI services in the Chinese market.