The White House has issued an executive order to manage the risks of artificial intelligence (AI), requiring the biggest AI developers to share information with the government before releasing their algorithms to the public.
The executive order is among the first government regulations on AI as authorities worldwide scramble to control the rapid advancement of the technology. It:
“…establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”
US Government Orders Safe, Innovative AI Implementation
The US government’s sweeping order aims to address the broad potential impact of AI technologies:
- sets standards for AI safety and security
- protects Americans’ privacy and advances equity and civil rights
- shapes the responsible use of AI in healthcare and education
- supports workers
- promotes innovation and competition
- ensures accountable and effective government use of AI
Among the key provisions, the Biden administration requires that companies developing the most powerful AI systems share the results of safety tests and other critical information with the US government.
Any AI model that potentially poses a severe risk to national security, economic stability, or public health and safety will fall under the Defense Production Act. This means the developing company must notify the federal government when training the model. The National Institute of Standards and Technology (NIST) will set standards for extensive testing to ensure that such models are safe before public release.
Addressing Potential AI Risks
Acknowledging the risks posed by the use of AI in various systems, the White House ordered:
- Agencies that fund life-science projects to establish standards for biological synthesis screening as a condition of federal funding to protect against the risks of using AI to engineer dangerous biological materials.
- The Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content, with the goal of limiting AI-enabled fraud and deception.
- The formation of an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
- The National Security Council and White House Chief of Staff to develop a National Security Memorandum that directs the US military and intelligence community to use AI safely and ethically, and that directs actions to counter adversaries’ military use of AI.
- Funding of a Research Coordination Network to accelerate the development and use of privacy-preserving technologies.
The Biden-Harris Administration has previously published a Blueprint for an AI Bill of Rights and issued an Executive Order directing agencies to combat algorithmic discrimination while enforcing existing authority to protect citizens’ rights and safety.
With growing concerns that AI could reinforce discrimination and displace jobs, the White House is directing additional actions to:
- provide guidance to keep AI algorithms from being used to exacerbate discrimination, including throughout the justice system
- advance the responsible use of AI in healthcare and the development of pharmaceutical drugs
- create resources to support educators deploying AI-enabled tools
- develop best practices to mitigate the harms and maximize the benefits of AI for workers
The Challenge of Government Regulation of AI
The broad extent of the executive order and the issues it raises about the impact of AI on national security, privacy, civil rights, healthcare, education, and the workforce illustrate the scope of the challenge governments face in regulating the proliferation of AI algorithms and models. It also raises questions about the extent to which governments will and can control the widespread adoption of AI.
The Biden Administration consulted a range of other governments on AI frameworks before releasing the order, including Australia, Brazil, Canada, Chile, the European Union, India, Israel, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The principles outlined support Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the UN, the statement noted.
But can governments balance the impact on job security in various industries with the desire to support home-grown companies in the global innovation race?
Several AI startups welcomed the announcement, but some CEOs expressed concerns over whether the regulations could hinder smaller companies from developing AI technologies, stifling innovation.
Leaders of advocacy groups responded positively but noted the challenges of implementation.
“It’s notable to see the Administration focus on both the emergent risks of sophisticated foundation models and the many ways AI systems are already impacting people’s rights. The Administration rightly underscores that US innovation must also include pioneering safeguards to deploy technology responsibly,” said Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology.
“Of course, the EO’s success will rely on its effective implementation. We urge the Administration to move quickly to meet relevant deadlines and to ensure that any guidance or mandates issued under the EO are sufficiently specific and actionable to drive meaningful change,” Givens said.
“Today’s executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology – but it’s only a first step. Along with establishing a framework for the government’s use of AI, the EO recognizes the power of the government to establish norms and standards as a major purchaser of technology. That’s an important policy lever,” said Robert Weissman, President of consumer advocacy group Public Citizen, in a statement.
“However, as much as the White House can do independently, those measures are no substitute for agency regulation and legislative action. Preventing the foreseeable and unforeseeable threats from AI requires agencies and Congress to take the baton from the White House and act now to shape the future of AI — rather than letting a handful of corporations determine our future, at potentially great peril,” Weissman added.
Public Citizen has been pushing for the regulation of AI technology, recently petitioning the Federal Election Commission (FEC) to introduce a new rule banning political deepfakes in election campaign advertising.
“The new executive order strikes the right tone by recognizing both the promise and perils of AI,” said Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University. “What’s missing is an enforcement and implementation mechanism. It’s calling for a lot of action that’s not likely to receive a response.”
The executive order from the Biden Administration addresses concerns shared by governments worldwide about the impact of rapidly developing and unregulated AI technologies on many aspects of public life, from national security to individual privacy rights, healthcare, education, and employment.
This raises questions about how much control governments can, and will, exert over the impact of AI. Effective regulation will require support from both lawmakers and businesses to realize the technology’s promise while limiting its harms.