Do you remember the Cambridge Analytica scandal? For the uninitiated, Cambridge Analytica was a company that worked with political campaigns, targeting groups of people with tailored messages to convince them to vote for a candidate or a political party.
It amassed a huge database of Facebook users' personal information without their consent and used it to target them with tailored campaigns. When the practice came to light in 2018, it caused a massive uproar. One of the major concerns was that your private data is no longer private, and your safety and privacy can be compromised.
Cut to 2023, and we’re still worried about the same problem, but its source is different: artificial intelligence (AI). There is a genuine concern that AI is powerful enough to subtly coax you into giving up your data, and that that data could be used for political gain.
When Countries Want Too Much Power Over the People
From a layman’s perspective, a ruler will forever want to cling to power by hook or by crook, especially in countries ruled by autocrats.
That doesn’t mean so-called democratic countries are immune to these tendencies; it’s just that strong institutions and checks and balances act as a deterrent.
AI has come as an ally, especially to dictatorial powers that want to control their citizens’ lives.
For example, this illuminating article in The New York Times shows the lengths the Chinese government has gone to in snooping on its citizens with sophisticated surveillance technology in the Xinjiang region.
Bring the power of AI into the mix, and there’s enough evidence to suggest it can exert massive power over people, even in a democracy.
According to Samm Sacks, a China technology policy expert and senior fellow at the think-tank New America: “The [Chinese] government is using these technologies to build surveillance systems and to detain minorities [in Xinjiang].”
The Chinese government has been targeting the Uighurs, a Muslim minority community in China, regulating their lives far beyond what a government should. This allegedly includes using AI-powered surveillance cameras to monitor individuals and build racial profiles.
The Role of Generative AI and Deep Fakes
The world will have many elections in 2024, and experts worry that voters could be swayed by false narratives built with AI.
Gary Marcus, a professor at New York University, said at a Reuters conference in New York: “The biggest immediate risk is the threat to democracy … there are a lot of elections around the world in 2024, and the chance that none of them will be swung by deep fakes and things like that is almost zero.”
The deepfake, a product of generative AI that can fabricate images, audio, or video, is already on the scene.
In the US, the race for the Republican presidential nomination has already seen the use of deepfakes.
In June 2023, the war room of Florida Governor Ron DeSantis released images showing Donald Trump hugging and kissing the nose of Dr. Anthony S. Fauci, former director of the National Institute of Allergy and Infectious Diseases.
The DeSantis campaign later acknowledged that the images were deepfakes.
In another example, from the UK, a deepfake showed Prime Minister Rishi Sunak pulling a badly poured pint while a woman looked at him derisively.
The image was later found to have been edited; in the original, the woman had a neutral expression.
These may be small cases for now, but lies spread faster than the truth, and even information we know to be fake can still shape our opinions.
The Bottom Line
The development and evolution of AI cannot, and should not, be reversed.
But it is a worrying trend that AI can be used by certain sections of governments and countries to fulfil their narrow goals.
Regulation may not be the complete solution, but this trend makes clear that a strong and comprehensive regulatory framework is needed.