Lingo Telecom, a voice service provider, has agreed to pay a $1 million fine for transmitting AI-generated robocalls that mimicked President Joe Biden’s voice.
The robocall campaign, orchestrated by political consultant Steve Kramer, targeted about 5,000 New Hampshire voters in January.
The fine was imposed by the Federal Communications Commission (FCC), which also said Lingo Telecom had agreed to comply with strict caller ID authentication rules and to verify the accuracy of the information provided by its customers and upstream providers.
“Every one of us deserves to know that the voice on the line is exactly who they claim to be,” FCC chairperson Jessica Rosenworcel said in a statement.
The deceptive calls misled New Hampshire voters by falsely claiming that participating in the state’s presidential primary would prevent them from voting in the general election.
This incident highlights the growing risks of AI in political manipulation, particularly ahead of the US elections. The FCC initially sought a $2 million fine, but the reduced penalty has not alleviated concerns over AI’s role in election interference.
After Biden’s win in the New Hampshire primary, Kramer claimed the robocall campaign was a stunt to expose the dangers of AI. The legal fallout, however, has cast serious doubt on that explanation.
Kramer now faces a proposed $6 million fine from the FCC, along with criminal charges for voter suppression and impersonating a candidate.
AI-Driven Misinformation and Its Impact on US Elections
The case against Lingo Telecom comes amid broader concerns about AI’s growing influence in US politics. As the 2024 elections approach, AI technology is being increasingly utilized, raising alarms about its potential to spread misinformation.
Earlier this week, former President Donald Trump shared misleading AI-generated images suggesting Taylor Swift supported him, drawing backlash from the singer’s fans. The music superstar has not endorsed any presidential candidate for the upcoming November election.
In another instance, Trump alleged that Vice President Kamala Harris used AI to fake a crowd at a Michigan rally, further fueling worries about AI’s role in shaping public perception.
The combination of AI voice-cloning technology and caller ID spoofing, as seen in the Lingo Telecom case, illustrates the significant risks that AI poses in the political landscape.