U.S. Vice President Kamala Harris has announced that the White House Office of Management and Budget (OMB) will implement a government-wide policy to enhance AI risk management and innovation across federal agencies.
The new OMB policy states that, by December 1, 2024, federal agencies will be expected to implement safeguards to assess, test, and monitor the impact of artificial intelligence on the public.
In short, they’ll need to mitigate the risks of algorithmic discrimination and give the public greater transparency into how they’re using AI. If they can’t, they must “cease using the AI system” unless they can justify that doing so would increase risks to safety or impede critical operations.
The announcement comes the same month that the European Union (EU) passed the EU Artificial Intelligence Act, which defined categories of AI risk and banned use cases such as social scoring systems.
Key Takeaways
- The White House aims to manage AI risks in federal agencies, promoting transparency and innovation.
- A new policy mandates safeguards by December 2024, requiring agencies to mitigate algorithmic discrimination and provide transparency, and to stop using AI systems that cannot meet these standards.
- Despite efforts to encourage AI experimentation, the policy faces criticism for leaving the private sector unregulated and for potential loopholes in its risk management.
- While the policy is a step towards AI regulation, challenges remain in implementation, public feedback integration, and impact assessment deadlines.
Breaking Down the White House’s New AI Policy
While the White House’s new policy requires federal agencies to increase transparency over the use of AI, it also seeks to remove “unnecessary barriers to AI innovation,” and actively “encourages agencies to responsibly experiment with generative AI.”
For example, the Biden-Harris administration has committed to hiring 100 AI professionals by Summer 2024 and has included an additional $5 million in the fiscal 2025 budget to expand the General Services Administration’s AI training program.
Joseph Thacker, principal AI engineer at AppOmni, told Techopedia via email:
“It’s extremely important that OMB is encouraging agencies to expand their usage of AI.
“Often, government agencies are slow at implementing and learning about new technologies – this will force them to learn AI much faster because the best way to learn is by using it. Implementing AI will also dramatically increase their ability to regulate it.”
The practical controls mandated by the OMB include requirements for federal agencies to designate chief AI officers to coordinate the use of AI, establish AI governance boards, and release government-owned AI code, models, and data (provided there’s no security risk).
In addition, federal agencies will be required to release annual inventories of AI use cases and how the agency is addressing relevant risks.
Some use cases can be withheld from this inventory if disclosing them would put a department’s operations at risk. In such cases, the agency must still notify the public that the AI system in question has been granted a waiver from compliance with the OMB’s policy.
Likewise, agencies must implement risk management practices for AI systems. In practice, this means “agencies must review each current or planned use of AI to assess whether it matches the definition of safety-impacting AI or rights-impacting AI.”
The White House Still Fails to Address the Private Sector
As this policy only impacts federal agencies, it still leaves the private sector largely unregulated. This isn’t necessarily a bad thing, as overzealous regulation could seriously threaten AI development, but it is unlikely to please those who want the societal risks around AI to be managed and regulated more proactively.
Gal Ringel, co-founder and CEO at Mine, a global privacy management firm, told Techopedia:
“These rules will be somewhat successful in safeguarding AI use, but it’s key to understand this only applies to the government and, thus, the public sector.
“The American private sector, from where much of the technological innovation of the past few decades has come, is still operating with mostly free rein when it comes to AI.
“There needs to be a federal law that oversees the private sector, and while you don’t need to take the same risk-based approach the EU and UK have, meaningful legislation needs to come through to promote the same principles of transparency, harm reduction, and responsible usage echoed in the announcement.”
For instance, there is no federal law against deepfakes being used to impersonate individuals without their consent.
Limitations of the White House’s Policy in the Public Sector
If we look at the White House’s policy just in terms of its impact on the public sector, there are some significant limitations that create barriers to innovation and weaken risk management.
Thacker said: “In section 5 of the policy on managing AI risks, there’s a requirement to ‘consult and incorporate feedback from affected communities and the public’, including on how the agency implements minimum risk management practices.
“If the groups providing feedback are anti-AI or overly worried about the technology’s safety or effects, this could dramatically slow down the process of implementing AI, and create roadblocks.”
At the same time, Thacker notes that while drafting risk management policies and plans by the December deadline is doable, the requirement to perform an impact assessment and test AI systems in a real-world context may be difficult to complete by that date.
Perhaps more seriously, Ringel explained that “internal assessments and oversight could provide a loophole for lax AI governance.”
After all, such a measure relies on each department being able to accurately and fairly assess its own level of risk and overall compliance with the policy.
This is a significant weakness because it could give the public the impression that they are better protected against AI-related risks than they actually are.
The Bottom Line
What’s strange about this policy is that it shows the White House wants all the advantages of AI yet simultaneously doesn’t want to move too fast. The end result is a policy that constrains AI innovation in some areas while leaving loopholes that enable it in others.
Above all, the White House’s policy demonstrates that there is a long way to go before AI is regulated in the U.S.
That being said, there is potential for these public sector guidelines to eventually be applied to private sector organizations too, for better or worse.