A Louisiana political consultant was indicted on May 23 for orchestrating a fake robocall that impersonated U.S. President Joe Biden.
The Biden robocall aimed to dissuade voters from supporting Biden in New Hampshire’s Democratic primary election.
New Hampshire’s Attorney General, John Formella, announced that the state indicted Steve Kramer on 26 charges: 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate.
The Attorney General accused Kramer of suppressing voters by sending misleading phone messages that hid their true source, used a fake candidate voice, or provided false information to discourage voting.
In a separate announcement, the Federal Communications Commission (FCC) proposed a $6 million fine against Kramer for violating caller ID rules. Attorney Formella expressed satisfaction with the FCC’s proposed fine, emphasizing the importance of protecting consumers and voters from harmful robocalls and voter suppression.
He stated,
“I am pleased to see that our federal partners are similarly committed to protecting consumers and voters from harmful robocalls and voter suppression. I hope that our respective enforcement actions send a strong deterrent signal to anyone who might consider interfering with elections, whether through the use of artificial intelligence or otherwise.”
Biden Robocall Attacks
In January 2024, thousands of New Hampshire voters received a shocking robocall: an AI-generated voice impersonating Joe Biden urged them not to cast their ballots in the state’s primary.
FAKE BIDEN ROBO-CALL TELLS NEW HAMPSHIRE VOTERS TO STAY HOME (Reuters)
As New Hampshire voters prepared to cast their votes in the state's first-in-the nation primary Tuesday, a robo-call is circulating in the state urging Democrats to stay home – using a fake audio of U.S.… pic.twitter.com/PB4D1yD8pv
— FXHedge (@Fxhedgers) January 22, 2024
The Attorney General’s office issued a public statement, warning voters to ignore the deepfake robocall of Joe Biden. Steve Kramer, who worked for Dean Phillips’ presidential campaign, admitted to orchestrating the scheme in a February interview with NBC News.
Concerns about AI Malpractices
US senators have sounded the alarm on AI’s potential to disrupt elections, citing its ability to deceive voters. Examples include the AI-generated Biden robocall and a DeSantis campaign video featuring fake AI-generated images of Donald Trump.
Twitter erupts after Trump trolls DeSantis with AI-generated video of his 2024 announcement https://t.co/IOtM0p2rCZ
— Fox News (@FoxNews) May 25, 2023
A bipartisan group of lawmakers has also recommended that Congress provide $32 billion in extra funding to develop AI and establish safeguards around it.
However, the path to passing legislation that effectively addresses this issue remains uncertain. During a May 15 hearing, the Senate Rules Committee advanced three bills to combat the threat of AI in elections.
These are the Protect Elections from Deceptive AI Act, the AI Transparency in Elections Act, and the Preparing Election Administrators for AI Act.
The Protect Elections from Deceptive AI Act would prohibit the use of deceptive AI-generated video or audio of federal candidates in political ads. The AI Transparency in Elections Act would require political ads that use AI-generated content to disclose that fact.
I held a Senate Rules Committee hearing on ensuring that every eligible American can make their voices heard when they cast their ballots. From protecting poll workers to putting in guardrails on AI, more needs to be done to support our election administrators. pic.twitter.com/sPwaLkZzRs
— Senator Amy Klobuchar (@SenAmyKlobuchar) March 15, 2024
Meanwhile, the Preparing Election Administrators for AI Act would require the US Election Assistance Commission to work alongside the National Institute of Standards and Technology to develop voluntary guidelines for election officials on protecting against AI threats, especially from foreign adversaries.