What Is the EU AI Act?
The European Union Artificial Intelligence Act (EU AI Act) is a proposed regulatory framework for the development, marketing, and use of artificial intelligence (AI). The purpose of the framework is to legally define artificial intelligence and impose documentation, auditing, and process requirements on AI providers.
The framework is risk-based and, like the General Data Protection Regulation (GDPR), is intended to strike a balance between innovation, economic interests, and citizens’ rights and safety. If passed, the framework will be binding on all 27 EU Member States and apply to anyone who creates and disseminates AI systems in the European Union, including foreign companies such as Microsoft, Google, and OpenAI.
If the framework becomes law, it will require companies to formally assess the risks posed by their AI systems before they are put into use, and it will grant EU regulators the authority to fine companies that violate the framework’s compliance rules.
The legislation will also give European citizens the power to file complaints against AI providers they believe are in breach of the Act.
Defining AI
The legal definition of AI has been an important issue in determining the scope of the proposed regulation, and the definition was revised multiple times before the framework was approved by the European Parliament in June 2023.
The latest revision of the Act defines AI as “a machine-based system that is designed to operate with varying levels of autonomy, and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
Risk Levels
Under the AI Act, every AI system will be classified into one of four risk levels:
- Unacceptable risk
AI systems that pose an unacceptable risk to fundamental rights and freedoms will be prohibited. Examples include AI used for social scoring that could discriminate against certain groups of people, AI used to create deepfakes that could spread misinformation or propaganda, and unauthorized AI systems that could control critical infrastructure such as power grids or transportation systems.
- High risk
AI systems that pose a high risk to fundamental rights and freedoms will be subject to a number of regulatory compliance requirements. Examples include AI used to assess creditworthiness or make hiring decisions, AI that provides facial recognition or other biometric identification services, and AI used to make medical diagnoses or recommend treatments.
High-risk AI systems will require a permit from a government regulator. They must be trained on high-quality data, have logging and traceability capabilities, and undergo a thorough risk management and mitigation process. Permit applications will need to be accompanied by detailed documentation.
- Limited risk
AI systems that pose a limited risk to fundamental rights and freedoms face only light transparency obligations, such as making users aware that they are interacting with an AI system. This risk category includes AI systems used to provide customer service or answer questions, generate personalized news feeds or product recommendations, and control smart home devices or play games.
- Minimal or no risk
AI systems that pose minimal or no risk to fundamental rights and freedoms are not subject to any specific requirements. This includes AI systems that are used to identify and block phishing emails, generate weather forecasts, process images or videos, or make simple predictions, such as whether a customer is likely to click on an ad.
The EU AI Act does not explicitly categorize unknown risk levels, but it does state that if an AI system poses a risk that is “not yet known,” the system should be considered high risk.
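To make the tiering concrete, here is a minimal Python sketch of how a provider might map its own use cases onto these four levels, including the default-to-high-risk treatment of risks that are "not yet known." The RiskLevel enum, the USE_CASE_TIERS mapping, and the classify helper are illustrative names invented for this example, not anything defined by the Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permits, logging, risk management, documentation
    LIMITED = "limited"            # light transparency obligations
    MINIMAL = "minimal"            # no specific requirements

# Illustrative mapping of example use cases to tiers, following the
# categories described above (not an official taxonomy from the Act).
USE_CASE_TIERS = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "credit_scoring": RiskLevel.HIGH,
    "biometric_identification": RiskLevel.HIGH,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a use case; unknown risks default to HIGH,
    mirroring the Act's treatment of risks that are "not yet known"."""
    return USE_CASE_TIERS.get(use_case, RiskLevel.HIGH)

print(classify("spam_filter"))        # RiskLevel.MINIMAL
print(classify("new_untested_idea"))  # RiskLevel.HIGH (unknown -> high)
```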
EU AI Act Compliance
The European Union AI Act requires EU member states to establish at least one regulatory sandbox: an official, secure environment where AI systems can be tested before they are deployed.
Rule enforcement will be up to the EU’s 27 member states, and non-compliance will be subject to penalties of up to €40 million, or 7% of a company’s annual global revenue, whichever is higher.
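To illustrate the "whichever is higher" rule, the short Python sketch below computes the penalty ceiling for a given company; the function name and the sample revenue figure are assumptions made for this example.

```python
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Upper bound on a non-compliance penalty: the greater of a flat
    EUR 40 million or 7% of annual global revenue."""
    return max(40_000_000, 0.07 * annual_global_revenue_eur)

# A hypothetical company with EUR 2 billion in global revenue faces a cap
# of EUR 140 million, since 7% of revenue exceeds the flat amount.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```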
Many of Europe’s top business leaders have pushed back on the European Union’s proposed legislation, warning that the draft rules for high-risk AI go too far, especially with regard to regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.
They have expressed concern that the EU proposal paints general-purpose AI systems (GPAIS) and large language models (LLMs) with too broad a brush, regardless of their use cases, and maintain that the resulting heavy compliance burden will stifle innovation and discourage investors.
Competing Proposals for AI Legislation
The European Union AI Act, which is sometimes referred to by the press as AIA, was approved by the European Parliament in June 2023 and is expected to be adopted by the Council of the European Union by the end of 2023. If passed, it will be the first major, comprehensive piece of AI regulation in the world.
As AI continues to develop, however, other countries and organizations are also recognizing the need to legislate artificial intelligence. In general, the competing proposals for AI regulation vary in their scope of regulation, their level of detail, and their enforcement mechanisms.
Some countries are more concerned about the ethical risks of AI, such as machine bias and privacy violations, while others are more focused on promoting the potential benefits of AI and economic growth. Many governments are struggling to craft legislation that balances the two.
Competing initiatives and proposed frameworks from around the world include:
| Country | Legislation/Strategy | Description |
|---|---|---|
| United States | Safe Innovation Framework for AI Policy | A framework that outlines four “guardrails” for AI regulation: accountability, transparency, explainability, and security. |
| | Blueprint for an AI Bill of Rights | Outlines industry-wide best-practice principles for AI development and use. |
| | National Artificial Intelligence Initiative | A bipartisan initiative aimed at accelerating the responsible, ethical development and adoption of AI in the United States. |
| China | New Generation Artificial Intelligence Development Plan | A plan that outlines China’s vision for AI development, particularly in healthcare, transportation, and security. |
| | Regulations on the Administration of Artificial Intelligence-Powered Products and Services | Establishes a framework for the models and rules used to generate AI-powered content. |
| | AI Ethics Guidelines for the Development and Application of Artificial Intelligence | Guidelines that provide principles for ethical AI development and use in China. |
| United Kingdom | National AI Strategy | A strategy that outlines the UK government’s vision for AI development in healthcare, education, and the environment. |
| | AI Governance Framework | A framework that emphasizes innovation and provides principles for AI governance in the UK, including transparency, accountability, and ethics. |
| Canada | The Artificial Intelligence and Data Act (AIDA) | A proposed regulatory framework for AI in Canada that covers all AI applications, regardless of risk. |
| | Pan-Canadian AI Strategy | The Canadian government’s vision for AI development, focused on healthcare, transportation, and the environment. |
| Australia | AI Ethics Framework | A framework that presents the ethical principles for AI development and use in Australia. |
| South Korea | AI Ethics Guidelines | Guidelines that provide ethical principles for AI development and use in South Korea. |
| | AI Industry Promotion Act | A master plan that consolidates previously fragmented AI legislation in South Korea. |
| Japan | AI Ethics Guidelines | Guidelines for AI development and use in Japan. |
| | Act on Promotion of Research, Development, and Utilization of Artificial Intelligence | A non-regulatory, non-binding framework for the development and use of AI in Japan. |
| Singapore | AI Governance Framework | A framework for AI governance in Singapore that addresses transparency, accountability, and ethics. |
| | National Artificial Intelligence Strategy | A strategy that sets out the Singapore government’s vision for AI development in healthcare, education, and the environment. |