Davos Final Thoughts: How AI is Reshaping Global Politics


There has been much debate about the extent to which artificial intelligence (AI) will be a force for good in the world — and how much of a threat it poses to humanity.

This includes how AI can be used in the context of war — not just on the physical battlefield, but on the battlefield of information and misinformation.

Key Takeaways

  • AI’s role in warfare is transforming global security, with enormous efforts underway to test and apply AI in military contexts.
  • Combined with quantum computing in the future, AI is predicted to have profound implications for global security, potentially rendering traditional military strategies obsolete.
  • Countering the misuse of AI to spread misinformation and deep fakes will require international cooperation, education, and regulatory measures.
  • Across Davos, panels simultaneously expressed hope and fear for the new age we find ourselves in — facing new technology that might be as transformative as the printing press.

One thing is clear — last year was pivotal in the transformation of warfare through the use of AI, Dmytro Kuleba, Minister of Foreign Affairs of Ukraine, said in a panel discussion at the World Economic Forum in Davos yesterday.

“Throughout 2024, we will be observing some — undebated publicly — enormous efforts to test and apply AI on the battlefield,” Kuleba said.

“But the power of AI is much broader than that. When nuclear weapons emerged, it completely changed the way humanity understands security architecture. To a large extent, it was an addition to diplomacy and a completely different reset of the rules.

“Now AI will have even bigger consequences to the way we think of global security. You do not need to hold a fleet thousands of kilometres away from your country if you have a fleet of drones smart enough to operate in the region.

“And when quantum computing arrives, and it matches with AI, things will get even worse with the global security and the way we manage the world.”


In the Russia-Ukraine war, for example, AI is being used to direct drones to pinpoint targets efficiently.

“You usually need up to 10 rounds of artillery to hit one target because of the corrections that you have to make with every new shot. If you have a drone connected to an AI-powered platform, you will need one shot — that has huge consequences,” Kuleba said.

“One of the biggest difficulties in the counter-offensive that we were undertaking last summer was that both sides — Ukraine and Russia — were using surveillance drones connected to striking drones to such an extent that soldiers physically could not move.

“Because the moment you walk out of the forest or the trench, you get immediately detected by the surveillance drones, who send a message to the striking drone, and you’re dead. It’s already having a huge effect on warfare.”

When governments consider the next level of threats they may face, they need to be aware that not every country in the world will agree to the regulated, civilized use of AI in warfare, Kuleba pointed out.

“I’m sure there will be two camps, two poles in the world in terms of approach to AI, and when people speak about a polarized world, it will be even more polarized because of the way AI will be treated.

“All of this will change enormously how humanity imagines its security, how diplomats try to keep things sane and manageable, and most importantly, how we do all our work.

“Diplomacy as a job will become either extremely boring or as exciting as ever after AI introduction,” Kuleba said.

How Can We Tackle Rogue Actors?

In a separate panel discussion at Davos, Jeremy Hunt, UK Chancellor of the Exchequer and former Foreign Secretary, emphasized the importance of international cooperation:

“We need to be sure that a rogue actor isn’t going to be able to use AI to build nuclear weapons. We need to have our eyes open, which is why the AI safety summit that Rishi Sunak organized at the end of last year was so important.

“But we need to do it in a light-touch way because we’ve got to be a bit humble — there’s so much that we don’t know. We need to understand the potential of where this is going to lead us, at a stage where no one really can answer that question.

“We have choices now, and the choice we need to make is how to harness it so that it is a force for good. One of the ways it would be a force for bad is if it just became a tool in a new geostrategic superpower race, with much of the energy put into weapons rather than things that could actually transform our daily lives.

“Those are choices we make, and one of the ways that you avoid that happening is by having a dialogue with countries like China over common ground.

“But I think we should — whilst being humble about not being able to predict the future — remember that we do have control over the laws, the regulations. We have the ability to shape this journey.”

Returning to the panel on AI’s impact on geopolitics, Mustafa Suleyman, co-founder of DeepMind and co-founder and chief executive officer (CEO) of Inflection AI, said the ability of AI systems to absorb and commoditize vast amounts of information, generate new information, make it widely available, and take action on it “is going to be massively destabilizing.”

“This is going to be the most transformational moment not just in technology but in culture and politics of all of our lifetimes.”

Battling AI Misuse in Politics

Implementing safeguards around AI is challenging, just as it is hard to stop bad actors from misusing cell phones or laptops. It was a challenge Sam Altman delved into at Davos on Thursday.

Suleyman said: “These technologies are not that dissimilar. Having said that, there are specific capabilities, for example, coaching around being able to manufacture a bio-weapon or a general bomb.

“Clearly, our models shouldn’t make it easier for an average non-technical person to go and manufacture anthrax — that would be both illegal and terrible for the world. So whether they’re open source or closed source, we can actually [restrict] those capabilities at source rather than relying on some bad actor not to be bad.”

When it comes to AI’s potential to spread misinformation, bad actors can feed false information into data sources at such volume that it can appear to be legitimate fact or opinion.

This, too, has consequences for international conflict, Kuleba noted.

“If I’m a rogue state and I want to prove that I’m the only one who has the right to exist and you all must speak my language, does it mean that if I spend billions and involve automated opinion producers like bots, the chat will come up with the opinion that actually it makes sense?”

Search engine results are filtered by algorithms, yet they still serve up links to differing opinions, and social media likewise exposes users to a range of views.

However, if humans build relationships with AI-driven assistants or chatbots, they may only be exposed to one opinion.

Kuleba said: “So, the transition that I see as extremely politically sensitive — and cultural as well — is the transition of a human being from looking for opinions to trusting the opinion of the universal intelligence, as AI will be considered, and that will become a problem in terms of politics.”

AI and False Information

Deep fake videos, which manipulate or generate video and audio content to deceive, are an increasing concern in politics. Irish Taoiseach Leo Varadkar has had his image manipulated in scam videos on the Internet in recent years.

In the latest deep fake, an ad circulating on social media uses footage of Varadkar and Virgin Media news anchor Colette Fitzpatrick, with AI voiceovers promoting cryptocurrency scams.

“I hope most people realize that’s not what I actually do,” Varadkar said.

“But it is a concern because it has gotten so good, and it’s only going to get better. I hear audio of politicians that is clearly fake, and people believe it.

“The fact that the political, societal, ethical debate around generative AI is happening in parallel as the technology is evolving is a lot healthier than what we’ve seen over the last 15-18 years, where you had the kind of explosion of social media.

“And many governments are now getting round to deciding what kind of guardrails and legislation they should put in place 15 years later after a great pendulum swing, which swung from a sort of tech euphoria and utopianism to tech pessimism. It’s much better if those things work in parallel.”

Education is Key to Restoring Trust

Educating the public and implementing watermarking are likely key to combating the effectiveness of deep fakes.

People should know about and be able to deal with the risks that can emerge from the technology, said Karoline Edtstadler, Federal Minister for the EU and Constitution at the Austrian Chancellery. “This is our common task and our common responsibility.”

Restoring trust that is undermined by the misuse of AI will require putting tools in place to deal with it when it happens, Varadkar said.

“Ironically, the use of AI and the misuse of AI in politics might have two unintended consequences — it might make people value more trusted sources of information. You might see a second age of traditional news, people wanting to go back to getting their news from a public service broadcaster or a newspaper with a 200-year record of getting the facts right.

“That might be one unintended outcome, and another in politics might actually be that politics starts becoming more organic again, and people want to see the candidate physically with their own eyes, want them to knock on their door again, be outside their supermarket. That might yet become an unintended consequence of people becoming so skeptical of what they’re seeing in electronic format.

“Detection is going to be really important so we can find out where it comes from. The platforms have a huge responsibility to take down content quickly. Some are better at that than others, but people in societies are also going to have to adapt to this new technology.

“Any time there’s new technology, people learn how to live with it. But we’re going to need to try and help our societies to do that… That’s AI awareness and AI education.

“But as a technology, I think it is going to be transformative — it’s going to change our world as much as the Internet has, maybe even the printing press — we need to kind of see it in that context.”

If You Can’t Detect, You Can’t Regulate, You Can’t Respond

Nick Clegg, President of Global Affairs at Meta and former UK Deputy Prime Minister, agreed on the importance of verifying content:

“If you want to regulate this space, you can’t respond to something, you can’t react to something, let alone regulate something if you can’t first detect it.

“If I was still in politics, the thing I’d put right in the front of the queue is getting all the industry — the big platforms are working on this already — but crucially the smaller players as well, and really force the pace on getting common standards on how to identify and have invisible watermarking in the images and videos which can be generated by generative AI tools.

“That does not exist at the moment; each company is doing their own thing. There are some interesting discussions happening… but in my view, that’s the most urgent task facing us today.”

Clegg said that the prevalence of hate speech on Facebook has fallen over the last two years as AI has become an effective tool in classifying and removing bad content.

“So it’s a sword and shield.”

Addressing Global Inequality

Clegg also emphasized the importance of ensuring that the benefits of AI spread beyond the developed world.

“We also need to ask ourselves who has access to these technologies. It is unsustainable and impractical to cleave to the view that only a handful of basically West Coast companies with enough GPU capacity, enough deep pockets, and enough access to data can run this foundational technology. We are such an advocate of open source to democratize this.

“We should also look at history — the industrial revolution, the computer revolution — where those revolutions succeeded was where the benefits were spread evenly throughout society and not concentrated in small groups.

“In the case of AI, I would say the challenge is to make sure the benefits are spread throughout the world — North and South, developing world and developed world — and not just concentrated in advanced economies, because otherwise, that will deepen some of the fractures that are already taking us in the wrong direction.”

The Bottom Line

Of everything said above, one remark from Irish Taoiseach Leo Varadkar about AI stands out:

“It’s going to change our world as much as the Internet has, maybe even the printing press.”

It may take years or decades before we can look back at the good and bad decisions made in these early days and see how much society has shifted with AI in our midst.

These are crucial years for getting laws and regulations right — keeping safety the top priority without stifling innovation and the benefits AI can bring.

Whether this can happen globally, with everyone following the rules, is a question that so far has no answer.

Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.