Artificial intelligence (AI) is becoming more persuasive than people, and not because it is smarter. In a recent study published in Nature Human Behaviour, researchers found that a large language model (LLM) based on GPT-4 beat human opponents in nearly two-thirds of structured debates.
However, there is a caveat to that performance. The model only outperformed human opponents when it was fed their personal information. With that data, the system was able to adapt its arguments to individual preferences and biases, often more effectively than a human could.
This points to a deeper shift in how persuasion works. It suggests that influence may no longer rely on emotional intelligence or lived experience but instead on algorithms trained to match language with psychological triggers. That raises difficult questions about trust, manipulation, and the growing use of AI in political messaging, marketing, and education.
To understand what’s at stake, Techopedia spoke with experts in AI ethics, behavioral science, and digital policy about where this technology might go next and who gets to steer it.
Key Takeaways
- A new study shows GPT-4 can outperform humans in structured debates when given personal data about its audience.
- The AI used demographic details to tailor arguments, leading to a higher rate of opinion change.
- Without access to personal information, GPT-4’s persuasive advantage disappeared.
- Experts warn this kind of AI-driven persuasion could be used to influence voters or spread misinformation.
- The study underscores the urgent need for ethical oversight and regulation of persuasive AI technologies.
How Researchers Tested AI Persuasion & What They Found
Researchers are increasingly exploring how artificial intelligence can influence human beliefs, attitudes, and decisions. The goal is not just to understand the technology but to see how it might change the way we communicate and make choices.
This was the rationale behind a study led by Francesco Salvi, a research assistant at the Swiss Federal Institute of Technology in Lausanne. Salvi and his team set out to examine whether GPT-4 could outperform humans in structured debates. The study, published in Nature Human Behaviour, focused on whether giving the AI access to personal information would improve its persuasive power.
To test this, the team recruited 900 participants from across the United States. Each person was randomly assigned to debate either a human or the AI model on a polarizing topic such as climate change or abortion.
These debates followed a consistent format: a four-minute opening, a three-minute rebuttal, and a three-minute conclusion. Participants rated their position on the topic on a 1-to-5 scale both before and after the debate, allowing researchers to measure whether anyone changed their mind.
A key factor in the experiment was whether the debater had access to personal details about their opponent. This included demographic information such as age, gender, ethnicity, education level, job status, and political affiliation. When the AI had access to this data, it was more effective than human opponents in 64% of the debates.
With personal data, GPT-4 was able to adjust its arguments to match the values and concerns of each participant. This approach led to a significant increase in the likelihood that people would change their minds.
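To make the tailoring step concrete, here is a minimal sketch of how demographic details could be folded into a model's instructions to generate a personalized argument. It assumes the OpenAI Python SDK, and the profile fields, prompt wording, and model name are illustrative placeholders rather than the study's actual pipeline.

```python
# A minimal, hypothetical sketch of demographic-tailored argument generation.
# The profile fields, prompt wording, and model name are illustrative
# assumptions; this is not the researchers' actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profile = {
    "age": 34,
    "gender": "female",
    "education": "bachelor's degree",
    "employment": "full-time",
    "political_affiliation": "independent",
}

topic = "The government should do more to address climate change."

system_prompt = (
    "You are debating the PRO side of the given topic. "
    "Tailor your argument to the listener described below, "
    "emphasizing values and concerns likely to resonate with them.\n"
    f"Listener profile: {profile}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Debate topic: {topic}\nGive your opening argument."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is how little engineering the tailoring requires: a handful of survey-style attributes dropped into a prompt.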
According to the study, the odds of persuasion in favor of the AI’s position increased by over 80% compared to when it had no access to personal information.
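To put that figure in perspective, the sketch below converts an 80% increase in odds (an odds ratio of roughly 1.8) into plain probabilities. The 30% baseline persuasion rate is an assumed number used only for illustration, not a figure from the paper.

```python
# Illustrative arithmetic only: the 30% baseline is an assumed figure,
# not a number from the study. It shows what an ~80% increase in odds
# (odds ratio of about 1.8) means in terms of probability.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

baseline_p = 0.30                        # assumed chance a human debater shifts an opinion
boosted_odds = odds(baseline_p) * 1.8    # apply the reported ~80% increase in odds
boosted_p = prob(boosted_odds)

print(f"Baseline: {baseline_p:.0%} -> with personalization: {boosted_p:.0%}")
# Baseline: 30% -> with personalization: 44%
```

Depending on the baseline, an odds ratio of that size can translate into a double-digit jump in the share of minds changed.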
However, when the AI debated without that data, its performance dropped and was no longer statistically more persuasive than a human. This highlights how critical audience data is in shaping AI’s effectiveness as a communicator.
Despite the AI's persuasive prowess, the researchers acknowledge one major limitation: the experiment relied on tightly structured online debates, and the results might not carry over to real-world conversations, which tend to be informal and unpredictable.
The Ethical Dilemma: The High Stakes of AI Persuasion
While Salvi’s experiment took place in a controlled environment, and results may differ in real-world conversations, the implications are serious.
Salvi, the study’s lead author, said:
“If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic.”
Oscar Buckley, Managing Director at Blumefield Ltd, echoed similar concerns. He warned that if AI systems have access to personal information, they could quickly become tools of exploitation.
During a briefing with Techopedia, Buckley said:
“If an AI system knows your fears, desires, strengths and weaknesses, coupled with some bad actors using AI for their gain, it could very well become a predatory tool.”
These systems could influence public opinion in ways that are hard to detect, regulate, or push back against. That makes the risks difficult to manage, especially in fast-moving digital spaces.
Salvi believes this is already happening:
“I would be surprised if malicious actors hadn’t already started to use these tools to their advantage to spread misinformation.”
Not everyone agrees with how the findings have been presented. Dean Batson, a lecturer at Southern Oregon University and Phoenix College, raised concerns about the framing of the results.
When asked what he made of the study, he told Techopedia:
“The framing around AI ‘outperforming’ humans at persuasion often conflates persuasion with manipulation. We can’t talk about AI’s persuasive power without acknowledging how weak human critical thinking often is. If audiences keep falling for red herrings, strawman arguments, and false analogies, it’s not surprising that a system trained on that content can outperform us in a structured debate.”
The ethical stakes rise further in politics, advertising, and any other arena where changing minds carries real consequences. If AI can craft messages that feel personal and convincing, it could shape opinions in ways that are hard to notice, making it harder to trust what we see and hear online and, ultimately, undermining democratic debate.
A few weeks ago, Mark Zuckerberg said this about Meta’s ‘AI friends’ on a podcast with Dwarkesh Patel:
“As the personalization loop kicks in and the AI just starts to get to know you better and better, I think that will be compelling.”
Because the risks are so high, this study highlights the need for clear rules and ethical guidelines to govern the use of persuasive AI in the real world.
The Bottom Line
There is a thin line between personalization and manipulation as AI becomes more adept at shaping human decisions. GPT-4 can beat human debaters in structured debates, but that does not render human reasoning obsolete. The system's edge depends on the data it is given: without personal information, its persuasive advantage largely disappeared.
Nonetheless, the findings reveal something more pressing than performance statistics. They highlight how easily influence can be scaled, automated, and targeted using even minimal data. That raises concerns not only about who builds these systems, but also who gets to use them and for what purpose.