The Chinese government is taking a leading role in setting boundaries on how artificial intelligence (AI) technology can and should be used. Beijing has defined a set of provisional rules that are set to come into force on 15 August 2023. These regulations will apply to all services that use generative AI to produce various types of media, such as pictures, text, audio, and video. All content accessible to the Chinese public must adhere to these rules, and a licensing regime will be implemented for all providers.
The regulatory authorities confirmed that their aim is to “balance development and security” without restricting innovation too much since their intent is still to “encourage innovative development of generative AI.”
What do these rules actually entail? And how will other countries react to the apparently unstoppable avalanche of change that AI is currently bringing to the table?
The Rules Established By China
The rules described as “Interim Measures for the Management of Generative Artificial Intelligence Services” are set to serve the following purposes:
- Standardizing the application of AI
- Promoting a “healthy development” of this technology
- Encouraging innovation but with due prudence
- Safeguarding national security
- Protecting the interest and rights of Chinese citizens
- Respecting social morality and ethics
- Preventing discrimination, ethnic hatred, violence, obscenity, and false information
- Adhering to the core values of socialism
All content generated by AI must strictly adhere to these rules, which encompass the prohibition of promoting terrorism, racism, pornography, or anything that could pose a threat to national security, incite subversion, or undermine national stability. To ensure compliance, any algorithm or service with the potential to influence public opinion must be registered with the governmental authorities. Subsequently, an administrative license will be issued in accordance with Chinese laws.
Service providers bear the responsibility of identifying and promptly halting any illegal content generated by their algorithms. Furthermore, they are obligated to report such incidents to the respective authorities. Additionally, providers must implement anti-addiction systems specifically designed for underage users, similar to those employed to prevent minors from spending excessive time and money on video games. As of now, the punitive terms for potential violations are yet to be determined, as the fines included in an earlier draft have been removed from the current version in recent days.
These restrictions apparently apply only to services that can influence public opinion, while those used for internal corporate or industrial purposes are not covered by the regulation. The state aims to drive the innovation brought by generative AI in a healthy and positive direction across “all industries and fields” and supports the development of all software, tools, data sources, and hardware provided they are “secure and trustworthy.”
Lastly, China encourages international cooperation in the formulation of rules related to generative AI, provided this occurs “on an equal footing and mutual benefit.”
Is Restricting AI Only Reasonable Or Outright Necessary?
The explosive expansion of generative AI uses is taking the entire world by storm, and many experts in the field are asking regulators to take a stand and define some limits. Some have gone so far as to express concerns about the potential risk of human extinction if AI use (and abuse) is not limited. Although such Skynet scenarios may be a bit of an exaggeration, it would be unwise to overlook the serious threats that uncontrolled AI growth poses to our society.
On the one hand, the integration of generative AI in healthcare services holds the promise of saving countless lives. On the other, there is concern that it may deepen existing inequalities.
A noteworthy instance is the alleged unethical use of AI during the recent Hollywood actors’ strike. While still largely unconfirmed, some sources from the SAG-AFTRA negotiations suggested that studios might exploit AI to replicate the likenesses of background actors and avoid compensating them in future productions.
However, that’s not all. The unregulated use of generative AI comes with other risks. When generated content is inappropriate, inaccurate, or inaccessible, the risk of harm can be significant. For example, a flawed AI-generated therapy plan handed to a doctor, or faulty AI-generated instructions for heavy machinery maintenance given to an oil rig operator, could have serious consequences.
To avoid such dangers, it is crucial to deploy these algorithms with clear and comprehensive guidelines to minimize unintended consequences stemming from poorly designed generative AIs.
Where Do Other Major Global Players Stand On Regulating AI?
While the Chinese regulations for generative AI are notably strict and well-defined, China is not the first major global player attempting to address this issue.
In June 2023, the European Parliament took a significant step forward by adopting its negotiating position on the EU Artificial Intelligence Act (“AI Act”). This move paves the way for negotiating a compromise between the three institutions of the European Union: the European Parliament, the Council, and the Commission, with the ultimate goal of agreeing on a final Act.
Under their “risk-based approach to AI,” the EU Parliament explicitly prohibits any AI that “subliminally or purposefully” manipulates people, exploits their vulnerabilities, or is used to categorize individuals based on their behavior, status, or personal characteristics. Additionally, generative AIs will be required to “comply with additional transparency requirements,” including the explicit labeling of content as generated by AI and the establishment of design rules to prevent the generation of illegal content.
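As an illustration only: the AI Act mandates disclosure but does not prescribe any particular mechanism, so one way a provider might satisfy the labeling requirement is to attach a machine-readable disclosure record to every generated item. The function and field names in this minimal Python sketch are hypothetical, not part of any regulation or standard:

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Wrap generated text in a machine-readable AI-disclosure envelope.

    Hypothetical sketch: the JSON structure and field names here are
    illustrative; the AI Act does not specify a disclosure format.
    """
    record = {
        "content": text,
        "ai_generated": True,  # explicit disclosure flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Usage: a downstream consumer can parse the envelope and surface
# the disclosure to end users before displaying the content.
envelope = label_generated_content("Sample generated text.", "example-model")
```

In practice, providers would more likely embed such provenance data via watermarking or standardized content-credential metadata rather than a plain JSON wrapper, but the principle of coupling content to an explicit disclosure is the same.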
These measures aim to promote the responsible and ethical use of generative AI within the European Union.
On the other side of the Atlantic Ocean, the United States government has also taken steps toward establishing boundaries for unregulated AI proliferation. In January 2023, the National Institute of Standards and Technology (NIST) released the “Artificial Intelligence Risk Management Framework.” Although compliance with this framework is voluntary, its primary objective is to “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
Within the framework, the inherent risks of uncontrolled AI adoption are acknowledged, especially the fact that AI technologies could “exacerbate inequitable or undesirable outcomes for individuals and communities.” The NIST’s proposal offers a set of practical guidelines for all AI actors to “govern, map, measure, and manage” the development and deployment of ethical and sustainable AI models.
Policymakers are facing challenges in keeping pace with the rapid evolution of generative AI. Much like an unstoppable nuclear reaction, since the first publicly accessible models were released a few months ago, we have reached a turning point where significant changes occur in a matter of weeks. Regulations, by contrast, traditionally demand a lengthy process of drafting, debating, negotiating, and enforcing that can take months or years to complete.
In this fast-moving landscape, time has become a luxury that we can no longer afford. Swift action is imperative to ensure that the adoption of generative AI happens in a healthy, ethical, and safe manner.