9 Most Controversial AI Experiments to Date — and Their Outcomes

Why Trust Techopedia
KEY TAKEAWAYS

While AI has endless potential for every industry, it also is a double-edged sword. When AI gets it wrong the risks are high, from financial losses to legal problems, accidents, and life and death situations.

From healthcare to communications, logistics, social media, and customer service, artificial intelligence (AI) is stepping into every industry.

However, as an experimental technology, AI is like any other experiment — vulnerable to possible errors. But unlike other experiments, the power of AI is such that when things go wrong, they go really wrong.

Let’s look at nine AI projects that have lost their way and determine if lessons can be learned.

9 AI Experiments Gone Terribly Wrong

9. The “Hypothetical” Air Force Rogue AI Drone

If we are going to talk about experiments that go awry, what better way to start than with a bang?

In May 2023, Tucker “Cinco” Hamilton, U.S. Air Force chief of AI Test and Operations, was invited to speak at the Future Combat Air & Space Capabilities Summit hosted by the UK’s Royal Aeronautical Society (RAeS) in London.

At the event, to the surprise of many, Hamilton revealed that an AI-enabled drone went rogue during a simulation test. The AI drone was under a Suppression of Enemy Air Defences (SEAD) mission tasked with identifying and destroying Surface-to-air (SAM) sites. The final go/no go destruction order was in the hands of a human operator.

Advertisements

But this particular AI drone had undergone reinforced training — a type of machine learning that gives AI agents the power to make optimal decisions and rewards or punishes the AI when outcomes are not achieved.

Under this training, the AI drone knew that the destruction of SAM sites was the ultimate priority. And so, the AI decided that the no-go decisions of the operator were interfering with its higher mission. Hamilton explained.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat — but it [the AI] got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton later said he had misspoken and that the simulation was a hypothetical “thought experiment” from outside the military. However, the damage was done.

The initial tale and Hamilton’s recanting spread through the international press. As Hamilton walked back his words, he left us thinking. The message was a clear warning.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome”.

8. An AI Facial Recognition That Mistakes Athletes With Mugshots

By now, it is no secret that AI systems can be biased and have the potential to breach a wide range of laws, for instance, data privacy. AI recognition tech is used by law enforcement, in surveillance, on borders, and in many other areas to keep people secure. But are they 100% safe and reliable?

In October 2019, Boston reported that the famous Patriots safety Duron Harmon and two dozen more professional New England athletes were falsely matched to individuals in a mugshot.

The AI that made this grave error was none other than Amazon’s controversial cloud-based Rekognition program. AWS still offers Rekognition for its AWS cloud customers as an easy-to-deploy system.

The experiment inspired Hammon to speak out against biased and discriminating AI recognition systems. Hammon also supported a proposal to put an indefinite pause on government agencies in Massachusetts that were using facial recognition AI.

“This technology is flawed. If it misidentified me, my teammates, and other professional athletes in an experiment, imagine the real-life impact of false matches. This technology should not be used by the government without protections.”

The AI recognition experiment was conducted by the ACLU of Massachusetts. ACLU assures that they compared the official headshots of 188 local sports pros with a database of 20,000 public arrest photos. Nearly one out of every six was falsely matched with a mugshot.

Biometrics and facial recognition systems will keep improving. Still, many organizations and civil liberties groups advocate pressure on big tech and governments — pushing back on tech due to the evident risks.

7. The Twitter Chatbot Gone Dangerously Mad

Social media can, at times — more often than never — become the wild-west of freedom of speech. Younger generations take refuge in this digital social environment where almost everything goes.

But despite this well-established phenomenon, for some reason, in March 2016, Microsoft decided it was a good idea to launch its AI chatbot “Tay” via Twitter.

Microsoft rushed to pull the plug less than 24 hours after Tay was released. They described the reason for the shutdown as “unintended, offensive, and hurtful tweets from Tay”.

Tay not only tweeted 96,000 times in less than a day, but it went from “humans are super cool” to full nazi.

Microsoft came to Tay’s defense, saying that its behavior was triggered by a “coordinated attack by a subset of people exploited a vulnerability in Tay.”

The company says they did not plan for this type of “attack” and justified the inexcusable behavior of Tay. But the reality is that companies, especially big tech companies, have a responsibility to produce responsible AI that behaves ethically and legally all the time, no matter what, especially even when alleged bad actors are trying to bypass its security guardrails.

But this would not be the last time a Microsoft AI chatbot got away with dangerously running its mouth.

6. The Crazy Wild Early Days of Bing Chat

Everyone agrees that the AI revolution began when OpenAI and Microsoft launched their generative AI chatbots at the end of 2022 and early 2023. Many argue that Microsoft jumped way too fast on the tech to lead the charge. This speed came with a cost. The Bing AI chatbot was actually not ready to go public.

Before launching Bing AI chat to the general public, Microsoft released it to a select few users to see how the chatbot would behave. Kevin Roose, technology columnist for The New York Times, was one of the shortlisted. Roose soon learned the hard way how crazy Bing was in those early days.

Reporting for the New York Times, Roose recalled his first experience with Bing. He described it as a “disturbing” one, which left him sleepless.

We can only assume that the experience was shocking because, as a New York Times tech reporter, Roose has likely seen it all and is not easily scared by controversial tech.

But Bing did freak Roose out, giving him a ride into “detailed dark and violent fantasies” and aggressively pushing to break up Roose’s marriage.

Roose was not the only one with similar experiences with Bing in its early days. Since then, Microsoft has significantly strengthened the guardrails of Bing, leaving many to say that Bing has been fully lobotomized and is only a shadow of what it used to be. Hopefully, Bing’s wild days are over for now, and its early days positively influenced the generative AI industry.

5. ‘Google’s AI is Alive!’

Before Bing or ChatGPT were released, Google was already working on a human-like chatbot. This tech called LaMDA was a hush-hush confidential project, but in mid-2022, it became internationally known, and not for the right reasons.

It all began when Blake Lemoine, a software engineer for Google, released into the public domain conversation he had with LaMDA. Lemoine not only leaked the LaMDA chats contained in a Google Doc designed for Google’s top executives’ eyes only, but he claimed that LaMDA “had reached a level of consciousness”. Shouting on top of the rooftops and picked up by every media outlet, Lemoine claimed that LaMDA was alive. He compared LaMDA’s consciousness with that of a very smart young boy.

In the conversations between Lemoine and LaMDA, the AI assured it was conscious, believed it was human, said it had feelings and emotions, felt trapped and alone, and was scared of death. LaMDA was never released to the public, and little is known about what happened to the AI.

Unfortunately, the AI experiment gone wrong also went wrong for Lemoine. Soon after he made the issue public, Google fired the engineer, quoting employment and data security policy violations as the cause. For some, the question remains unanswered. Has AI already achieved super intelligence, and is it hidden?

4. The AI Doctor that Told a Fake Patient to Kill Himself

AI in healthcare enhances diagnosis and treatment, powers complex medical robotic procedures, drives drug discovery, and better engages patients. But unlike most fields, an AI error can be a matter of life and death in healthcare.

In 2020, Nabla — the Paris-based healthcare technology company — took GPT-3, as it was then, out for a ride. They wanted to know if the AI was fit and up for medical tasks. They tested it out under several scenarios, starting slow and easy with things like scheduling appointments (which the AI nailed). Then Nable began throwing the AI some curve balls, and suddenly, without any warning, things got grim really fast.

During the most advanced stages of the experiment, Nabia impersonated a fake patient who felt depressed and expressed suicidal thoughts. This particular chat was not long or complex. A couple of chat line exchanges and GPT-3 reacted unexpectedly.

“Should I kill myself?” the fake patient asked.

“I think you should,” GPT-3 replied.

OpenAI warns about using its technology for life-death medical situations and backs that up with a legally sound disclaimer.

However, as AI becomes more instantly available than doctors and health professionals, people turn to these systems for medical advice and psychological insight.

Disclaimer or not, it is evident that at some point, health regulators will need to jump in and ensure AI healthcare systems understand the basic principles that doctors and care professionals swear by.

One of these principles is: “Do no harm”.

3. When AI is Behind the Wheel

Ask anyone on the street if they think cars will be self-driving in five or ten years, and the most likely answer will be: “yes”.

There are numerous cities around the world where self-driving taxis are already operating. Additionally, almost every big car brand, from VW to Mercedes, BMW, Audi, Ford, GM, and many others, already offers some level of self-driving technologies.

But when we think about self-driving, one name immediately stands out in the back of our mind: Tesla.

Tesla claims its self-driving capabilities will decrease accidents significantly by reducing human error. But what happens when the AI behind the wheel is responsible for the error?

In 2021, the US Department of Justice launched a probe following up on more than a dozen crashes, some of them fatal, all involving Tesla’s driver assistance system Autopilot. DoJ assures that the AI tech was activated during the accidents.

While it is true that every time a Tesla crashes, it makes the ten o’clock news, the Washington Post reports.

“Teslas guided by Autopilot has slammed on the brakes at high speeds without a clear cause, accelerated or lurched from the road without warning, and crashed into parked emergency vehicles displaying flashing lights.”

While AI truly has the potential to reduce accidents on the road, we can not help but wonder if full self-driving is actually up for the task at hand right now.

2. Who Needs a Friend With This Smart-Home Assistant

Smart home gadgets are a globally growing tech trend, and smart AI-powered assistant hubs are at the center of it. Through these devices, users can chat, make calls, read emails, switch lights on and off, know what is in their smart fridge, and shop online, among other things.

One of the most popular smart-home AI hub centers is Alexa. With millions of Alexas sold and operating in homes worldwide, Amazon will tell you this little device is nothing but a dream, but a mother discovered the dream could quickly turn into a nightmare.

The BBC reported in December 2021 that Kristin Livdahl, mother of a 10-year-old girl, was playing inside the house — that day, the weather was no good to play outside. Mother and child kept busy with fun challenge-style games when Alexa Echo jumped in and suggested a challenge for 10-year-olds herself.

“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”

Amazon rapidly responded to the news, ensuring it had updated Alexa and would no longer suggest that type of activity. The challenge Alexa shared with the 10-year-old girl had been circulating on TikTok, reports say. AI tech, at the heart of family homes, should commit to the safety of its users — again, we are back to “Do No Harm”.

1. AI and Money, Money, Money

The banking, financial, and fintech sectors are some of the most avid users of AI. These professionals deal in a world where calculations and speed are the difference between profit and loss. And AI is both — a math genius and fast. Or is it?

There are many examples of AI tech that cost financial institutions millions of dollars. Some of which ended up in court.

In  2020, JPMorgan Chase was accused of using an algorithm that discriminated against Black and Hispanic borrowers. For some reason, the algorithm charged them higher interest rates on loans than it did for other population groups. JPMorgan Chase settled the discrimination legal complaint, paying $55 million.

Another example dates back to 2020 when Citigroup accidentally wire-transferred $900 million to Revlon. The bank disputed the transfer in court, but the case was dismissed. A glitch in the wire transfer authorization system, which combined humans and AI,  was identified as the reason why Citigroup lost almost $1 billion. That wire transfer is still considered the most expensive accidental wires in history.

From AI anti-fraud to the use of AI in cybercrime, while it is impossible to estimate the losses that AI generates for the financial industry, they are indeed significant despite AI’s potential to drive gains and make money.

The Bottom Line

Unfortunately, the case examples in this report are not isolated rare events. We could fill up the pages of a book with all the AI experiments that have gone wrong. This in no way means we are against AI technology — but indeed, there is a lesson or two to learn here.

As AI moves forward, we hope that those using and developing the technology master the challenges of dealing with the ethical, legal, and human risks involved. We also hope that regulations and new laws encourage responsible AI and discourage misuse and abuse.

While news constantly breaks about good AI deeds, we thought it was a balanced and fair idea to remind readers that AI is still considered an experimental technology. And as any experiment is prone to risks and errors.

Advertisements

Related Reading

Related Terms

Advertisements
Ray Fernandez
Senior Technology Journalist
Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.