Intellectual Property and Technology
China and the US Develop Plans for the Regulation of AI Systems
Published 9 May 2023
On 11 April 2023, China’s Cyberspace Administration (CAC) issued draft regulations governing the use of AI systems in China – the “Measures on the Administration of Generative Artificial Intelligence Services (Draft for Solicitation of Comments)”. The draft builds on policy/guideline announcements issued by the CAC, the Ministry of Public Security and the Ministry of Industry and Information Technology in November 2022, which came into effect in January 2023 – the “Provisions on the Administration of Deep Synthesis Internet Information Services”.
Article 4 of the draft outlines key areas of concern with generative AI – issues which AI developers should certainly keep in mind as they develop and promote AI systems. We expect the principles set out in Article 4 to be addressed in much greater detail in implementing regulations and interpretations:
“The provision of generative AI products and services shall comply with the requirements of laws and regulations, respect social mores and good customs, and meet the following requirements:
(1) Content generated using generative AI shall embody the Core Socialist Values and must not incite subversion of national sovereignty or the overturn of the socialist system, incite separatism, undermine national unity, advocate terrorism or extremism, propagate ethnic hatred and ethnic discrimination, or have information that is violent, obscene, or fake, as well as content that might disrupt the economic or social order.
(2) During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, measures are to be employed to prevent the occurrence of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, or profession.
(3) Respect intellectual property rights and commercial ethics; advantages in algorithms, data, platforms, and so forth must not be used to carry out unfair competition.
(4) Content created by generative AI shall be true and accurate, and measures are to be employed to prevent the generation of fake information.
(5) Respect the lawful rights and interests of others, prevent harm to the physical and mental health of others and their image rights, reputation rights, personal privacy, and the infringement of intellectual property rights. The illegal acquisition, disclosure, and use of personal information and privacy, as well as commercial secrets, is prohibited.”
Of particular concern to foreign AI developers is Article 6 of the draft. It provides that developers must file a security assessment declaration with the State Internet Information Department (SIID), in accordance with the Provisions on the Security Assessment of Internet Information Services, before offering AI systems or products to the general public. It remains to be seen exactly what this security assessment will involve, what information will need to be filed, and whether an approval process will be established – which could delay the rollout of AI systems in China. One thing seems certain: the SIID will take particular interest in the algorithms used by generative AI systems when reviewing security assessment filings.
On 4 May 2023, the White House released a policy document on certain aspects of AI entitled “New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety”. In it, the White House notes that regulation of AI has been progressing in the US for some time, and that the new document builds on the steps the Administration has taken to date, such as the Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.
The White House has followed China’s path in requiring security assessments for AI systems, but is going about it in a very different manner. The US does not appear to be leaning towards a mandatory pre-release filing procedure, but rather towards ongoing public security analysis of AI systems:
“Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.”
The position taken by the White House has generally been welcomed by AI developers. Further detailed regulations and policy announcements are expected from the White House and various federal agencies in the near future.
As we know, most AI systems are built for global use. Wouldn’t it be something if nations could come together at this relatively early stage of AI development and use to agree on international guidelines or conventions for the regulation of AI systems? Otherwise, the rollout of new AI systems and updates could be delayed by compliance issues around the globe. There have been discussions in the corridors of the international agencies working in this area, but for now no concrete plans are in place. We could see the World Intellectual Property Organisation leading an initiative here, given it has been active for some time in examining the regulation of AI systems.