China is taking a significant step toward regulating generative artificial intelligence (Generative AI) services with the release of draft measures by the Cyberspace Administration of China (CAC). Issued in April 2023, the draft measures are part of China’s ongoing efforts to ensure the responsible use of AI technology and set out how Generative AI services may be offered and used in the country. Let us look at the key provisions of the draft measures and their implications for Generative AI service providers.
The draft measures, known as the “Measures for the Management of Generative Artificial Intelligence Services,” outline the regulations for using Generative AI in the People’s Republic of China (PRC). These measures align with existing cybersecurity laws, including the PRC Cybersecurity Law, the Personal Information Protection Law (PIPL), and the Data Security Law. They follow earlier legislation, such as the “Internet Information Service Algorithmic Recommendation Management Provisions” and the “Provisions on the Administration of Deep Synthesis Internet Information Services.”
The draft measures are designed to apply to organizations and individuals providing Generative AI services, referred to as Service Providers, to the public within China. This includes chat and content generation services. Interestingly, even non-PRC providers of Generative AI services will be subject to these measures if their services are accessible to the public within China. These extraterritorial provisions reflect the government’s intent to regulate Generative AI services comprehensively.
Service Providers must comply with two filing requirements outlined in the draft measures. First, they must submit a security assessment to the CAC, adhering to the “Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity.” Second, they are required to file their algorithm in accordance with the Algorithmic Recommendation Provisions. While these requirements have been in place since 2018 and 2022, respectively, the draft measures explicitly clarify that Generative AI services are also subject to these filing obligations.
Service Providers must ensure the legality of the Training Data used to train Generative AI models. This includes verifying that the data does not infringe upon intellectual property rights or contain non-consensually collected personal information. Additionally, Service Providers must maintain meticulous records of the Training Data used. This requirement is crucial for potential audits by the CAC or other authorities, who may request detailed information on the training data’s source, scale, type, and quality.
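To illustrate what such record keeping might look like in practice, here is a minimal sketch, assuming a hypothetical provider appends one provenance entry per training-data source to an audit log. The `TrainingDataRecord` fields and the `append_audit_log` helper are illustrative assumptions, not anything prescribed by the draft measures.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TrainingDataRecord:
    """Hypothetical provenance entry for one training-data source."""
    source: str              # dataset name or origin (e.g. a licensed corpus)
    data_type: str           # e.g. "text", "image", "audio"
    record_count: int        # approximate scale of the dataset
    license: str             # IP/licensing basis for using the data
    consent_basis: str       # legal basis for any personal information included
    quality_notes: str = ""  # results of manual or automated quality checks
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_audit_log(record: TrainingDataRecord,
                     path: str = "training_data_log.jsonl") -> None:
    """Append the record to a JSON Lines audit log for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

# Example: log one licensed text corpus used for fine-tuning.
append_audit_log(TrainingDataRecord(
    source="licensed-news-corpus",
    data_type="text",
    record_count=1_200_000,
    license="commercial licence from publisher",
    consent_basis="no personal information included",
    quality_notes="deduplicated; filtered for prohibited content",
))
```

Keeping such entries in an append-only log would make it straightforward to answer an auditor’s questions about a dataset’s source, scale, type, and quality.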
Complying with these requirements presents challenges for Service Providers. Training AI models is an iterative process that relies heavily on user input, and capturing and filtering all of that input in real time would be arduous, if not impossible. This raises questions about the practical implementation and enforcement of the draft measures, particularly for Service Providers operating outside the CAC’s geographical reach.
The draft measures mandate that AI-generated content adhere to specific guidelines. Content must respect social morality and public order and good customs, and reflect core socialist values. It must not subvert state power, disrupt economic or social order, discriminate, infringe upon intellectual property rights, or spread untruthful information. Additionally, Service Providers must respect the lawful rights and interests of others.
The requirements regarding AI-generated content raise concerns about feasibility. Generative AI models excel at predicting plausible patterns rather than understanding the intrinsic meaning of text or verifying the truthfulness of statements. Instances of AI models fabricating answers, commonly known as “hallucination,” highlight the limitations of the technology in meeting the stringent guidelines set by the draft measures.
Service Providers are held legally responsible as “personal information processors” under the draft measures, which imposes obligations comparable to those of a “data controller” under other data protection regimes. If AI-generated content involves personal information, Service Providers must comply with the personal information protection obligations set out in the PIPL. Furthermore, they must establish a complaint mechanism to handle data subject requests for the revision, deletion, or masking of personal information.
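As a rough illustration of such a complaint mechanism, the sketch below assumes a hypothetical provider routes data subject requests for revision, deletion, or masking to dedicated handlers; the `RequestType` enum and handler functions are placeholders, not a design required by the draft measures.

```python
from enum import Enum
from typing import Callable, Dict

class RequestType(Enum):
    REVISE = "revise"   # correct inaccurate personal information
    DELETE = "delete"   # erase personal information entirely
    MASK = "mask"       # redact personal information instead of deleting it

def revise(user_id: str, detail: str) -> str:
    # Placeholder: update the stored personal information.
    return f"revised record for {user_id}: {detail}"

def delete(user_id: str, detail: str) -> str:
    # Placeholder: erase the stored personal information.
    return f"deleted record for {user_id}"

def mask(user_id: str, detail: str) -> str:
    # Placeholder: redact the relevant fields.
    return f"masked fields for {user_id}: {detail}"

HANDLERS: Dict[RequestType, Callable[[str, str], str]] = {
    RequestType.REVISE: revise,
    RequestType.DELETE: delete,
    RequestType.MASK: mask,
}

def handle_complaint(user_id: str, request_type: RequestType, detail: str) -> str:
    """Route a data subject request to the matching handler.

    In practice the request and its outcome would also be logged
    so the provider can demonstrate that complaints were handled.
    """
    return HANDLERS[request_type](user_id, detail)

# Example: a user asks for chat history containing a phone number to be deleted.
print(handle_complaint("user-123", RequestType.DELETE,
                       "chat history containing a phone number"))
```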
The draft measures include a “whistle-blowing” provision to address concerns about inappropriate AI-generated content. Users of Generative AI services are empowered to report inappropriate content to the CAC or relevant authorities. In response, Service Providers have three months to retrain their Generative AI models and ensure non-compliant content is no longer generated.
Service Providers must define appropriate user groups, occasions, and purposes for using Generative AI services. They must also adopt measures to prevent users from excessively relying on or becoming addicted to AI-generated content. Furthermore, Service Providers must provide user guidance to foster scientific understanding and rational use of AI-generated content, thereby discouraging improper use.
The draft measures prohibit Service Providers from retaining information that could be used to trace the identity of specific users. Profiling users based on their input and usage details, or providing such information to third parties, is also prohibited. These provisions aim to protect user privacy and prevent the misuse of personal information.
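One way a provider might approach this is sketched below, assuming usage logs keep only a pseudonymized identifier and coarse metadata rather than raw prompts; the salted hashing shown is an illustrative assumption and would not, on its own, guarantee compliance with the provision.

```python
import hashlib
import os

# A random per-deployment salt; rotating it limits long-term linkage of sessions.
_SALT = os.urandom(16)

def pseudonymize(user_id: str) -> str:
    """Return a salted hash so raw user identifiers never reach the logs."""
    return hashlib.sha256(_SALT + user_id.encode("utf-8")).hexdigest()[:16]

def log_request(user_id: str, prompt: str) -> dict:
    """Build a log entry that omits the prompt text and keeps only coarse metadata."""
    return {
        "user": pseudonymize(user_id),  # not directly traceable to the user
        "prompt_chars": len(prompt),    # usage volume, not content
    }

# Example: nothing identifying or content-bearing is written out.
print(log_request("user-123", "write a poem about spring"))
```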
Non-compliance with the draft measures may result in fines of up to RMB 100,000 (~USD 14,200). Where a Service Provider refuses to rectify violations, or under “grave circumstances,” the CAC and relevant authorities can suspend or terminate its use of Generative AI. In severe cases, violators may also face criminal liability if their actions breach criminal provisions.
China’s decision to regulate AI comes at a time of global discussions on the potential risks of the technology. As one of the pioneering regulatory frameworks for Generative AI, the draft measures are crucial for ensuring responsible AI use in China. However, the broad obligations imposed on Service Providers require careful consideration to strike a balance between regulation and fostering the competitiveness of Chinese Generative AI companies. Service Providers and related businesses should stay alert for any future updates as the CAC finalizes the measures.