Recent research shows that artificial intelligence (AI) has a remarkable ability to persuade human beings to act or think in a certain way.
The findings have sparked debate about the ethics and implications of using AI for influence. Studies have revealed several distinct traits of that influence: AI can be more convincing than humans, people are more willing to discuss sensitive matters with an AI, and AI can pass itself off as a real human being.
Given AI's capacity for continuous learning, we are still in the early days of this journey, but there is already plenty to take away.
AI Research at the University of Illinois and UTS Business School
Dr TaeWoo Kim of UTS Business School and his colleague Adam Duhachek of the University of Illinois have conducted a number of studies into the persuasive nature of AI.
When one person offers to split a sum of money unequally with another, the natural reaction is rejection, driven by a sense of unfair treatment.
However, participants accepted the same unequal offer when an AI, rather than a person, made it.
According to the researchers, this reveals a vulnerability that could be exploited.
Dr Kim said,
“Someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair.”
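Dr Kim's example follows the payoff structure of the classic ultimatum game from behavioral economics: if the responder rejects the offer, both players walk away with nothing. A minimal sketch of that structure, using the $100 pot and $80/$20 split from the quote (the function name and shape are illustrative, not taken from the study):

```python
# Payoff rule of the ultimatum game: an accepted offer is paid out as
# proposed; a rejected offer leaves both the proposer and the responder
# with nothing.

def ultimatum_payoffs(pot, proposer_share, responder_accepts):
    """Return (proposer, responder) payoffs for one round."""
    if responder_accepts:
        return proposer_share, pot - proposer_share
    return 0, 0  # rejection wipes out both payoffs

print(ultimatum_payoffs(100, 80, True))   # accepted: (80, 20)
print(ultimatum_payoffs(100, 80, False))  # rejected: (0, 0)
```

The rejected case makes the tension clear: turning down the $20 is economically worse than accepting it, yet, as the research above notes, people routinely reject such offers from other humans while accepting them from an AI.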
The same researchers ran another experiment on how comfortable people are discussing a urinary tract infection (UTI). Participants were more comfortable answering questions typically considered embarrassing when an AI asked them than when a human doctor did.
Dr Kim said, “We found this was because people don’t think AI judges our behavior.”
Another finding was that people were more motivated to act when the AI explained the ‘how’ of an action rather than the ‘why’.
“People were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.”
While these are early studies, the suggestion is we will act differently when we know we are interacting with a machine — but what about when we don’t know?
The Case of Google Duplex
Google Duplex, an automated voice assistant that can generate human-like speech, can make phone calls to human beings who are unable to distinguish between the computerized voice and that of a human being.
Published audio samples of these calls show the level of detail and sophistication AI has achieved. If you received one of these calls, you would be surprised to learn afterward that you had been speaking to a bot.
A team of researchers led by Talal Rahwan, an associate professor of Computer Science at NYU Abu Dhabi, found that “bots are more efficient than humans at certain human-machine interactions, but only if they are allowed to hide their non-human nature.”
Talal Rahwan said:
“Google Duplex’s speech is so realistic that the person on the other side of the phone may not even realize that they are talking to a bot. Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are?
“If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”
The AI Art of Persuasion
Looking at the cases above, in which an AI persuaded people to accept an unequal share of money and to discuss sensitive health details, it is clear that AI can wield the art of persuasion effectively.
The fact that humans may not feel insecure or fear judgment from a machine works in AI's favor. People may feel safe believing that the AI has no ulterior motives.
But can those persuasive powers of AI be abused? For example, an AI telecaller or marketer may be able to convince a loan applicant to accept a loan offer at an interest rate that is much higher than the standard market rates.
Should it also be a law that AI must identify itself?
Given the way AI is developing and the patterns and trends emerging, it was only a matter of time before AI developed the ability to persuade.
Over time, it will continue to strengthen these abilities, which also makes it a powerful tool in the hands of malicious users.
There must be a system or a framework that allows the potential benefits of persuasive AI to flourish, but not to the extent that any bad actor can use AI — particularly at scale — to perform dark or malicious tasks.