Occupational Hazard: The Pitfall of Automation

KEY TAKEAWAYS

In order for automation to work with and for humanity rather than against it, we need safeguards and informed humans empowered to stop or fix a system error.

“To err is human; to really foul things up requires a computer.” William E. Vaughan made this observation back in 1969. Handing control over to an automated system carries the potential for that system to go awry and cause serious harm before anyone checks it.

Automation is not new, but it is becoming a lot more widespread thanks to the integration of digital and physical systems. The upside of automation at scale is great efficiency. But the downside of relying on a set-it-and-forget-it system is that someone may fail to set it properly.

Automation can have destructive effects. That dystopian possibility was envisioned back in 1936 in “Modern Times,” when Charlie Chaplin’s factory worker is pulled off the assembly line and into the machine itself.

In today’s world, we have far greater precision in our machines and electronics, though errors can, and do, occur. Even when the automated processes work as designed, the experience for the humans involved can be far from optimal, as was the case for the large-scale layoffs at Alphabet.

AI Automation Benefits

Advocates of artificial intelligence (AI) automation point out that it relieves humans of repetitive tasks, research and data analysis. Greater efficiency in menial work frees up time that can be applied toward problem solving, development and research.

Some have pointed out that AI is already being applied to specific uses that lead to more accurate medical diagnoses, better use of natural resources and more personalized education than human teachers in the classroom can provide. It also improves safety in three areas:

  • Detecting the presence of weapons, such as firearms.
  • Removing the need for humans to be exposed to hazards both physical and chemical.
  • Predicting natural disasters.

These are not just possibilities but real-life advancements already in use. At this point, AI has proven itself capable of carrying out automated decisions. However, those decisions are not always the right ones, which raises the problem of the possible misuse of AI. (Read Also: AI in Healthcare: Identifying Risks & Saving Money)

AI Automation Misuse

Not everything that can be automated should be. A Forrester blog pointed out that we are now suffering the effects of what it calls “random acts of automation.”

Instead of improving outcomes as intended, these automations make the experience worse for customers and businesses that rely on them. Automated processes without oversight to correct them can lead to outcomes that range from annoyances to serious safety risks.

Here are some examples of when automation for automation’s sake goes wrong:

  • Flaws in self-checkout increase the delays they were intended to eliminate.
  • Chatbots that fail to answer questions and delay access to a real human increase customer frustration.
  • Discriminatory effects that deny people jobs or loans can be the result of biased algorithms.
  • Automation can even lead to fatalities in accidents involving autonomous vehicles and machinery.
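The chatbot failure above has a well-known mitigation: an escalation rule, so the bot never traps a customer in a loop with no path to a person. This is a minimal sketch with invented names and a stand-in intent matcher, not any vendor's actual system:

```python
# Hedged sketch: escalate to a human after repeated failures so the bot
# never becomes a dead end. All names and thresholds are invented.

MAX_FAILED_ATTEMPTS = 2

def handle_message(message: str, failed_attempts: int) -> str:
    answer = lookup_answer(message)  # hypothetical intent matcher
    if answer is not None:
        return answer
    if failed_attempts >= MAX_FAILED_ATTEMPTS:
        return "Connecting you to a human agent..."
    return "Sorry, I didn't understand. Could you rephrase?"

def lookup_answer(message: str):
    # Stand-in for a real intent matcher; returns None when it can't help.
    faq = {"hours": "We're open 9-5."}
    return next((a for k, a in faq.items() if k in message.lower()), None)
```

The design choice is the point: the automated path is the default, but a human remains reachable whenever the automation admits it has failed.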

Automation That Misses the Point

One of the more minor irritations that arise from automation is the proliferation of marketing messages we are all subjected to online. The brands that use these tactics bombard their customers with emails and texts meant to use FOMO to drive sales.

But overuse of automation diminishes the impact of FOMO because customers have learned that today’s offer will be followed by another one tomorrow and the next day and the day after that. Therefore, marketers try to get their attention with attempts at personalization that belie automation. (Read Also: AI Washing: Everything You Need to Know)

For example, last year, Old Navy tried to capture my attention by combining a seasonal promotion with a reminder that I had rewards I could use toward the purchase. It even tried to play on vanity by saying “these looks have your name all over ‘em,” but with no human or algorithmic check in place, the email went out with the absurd reminder that I had a total of $0 in rewards to use.
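The missing check could have been a one-line guard run before the send. This is a minimal sketch with an invented function name, not Old Navy's actual pipeline:

```python
# Hedged sketch: block a "use your rewards" email when there is nothing
# to use. The function name and signature are invented for illustration.

def should_send_rewards_email(rewards_balance: float) -> bool:
    """Only send the message if the customer actually has rewards."""
    return rewards_balance > 0

# A $15 balance passes; the $0 case from the anecdote is suppressed.
assert should_send_rewards_email(15.00) is True
assert should_send_rewards_email(0.00) is False
```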

While there was no serious harm resulting from this email, thoughtless automated messages can show a lack of sensitivity and consideration. Such was the case for some of the recipients of notices of termination from Google’s parent company, Alphabet.

A number of employees who had been terminated from the tech giant took to social media to complain about the callousness of the company. For example, Blair Bolick, who had been a recruiter for Google, shared her reaction in a LinkedIn post:

“For me, the toxic positivity I’m coming across is almost unbearable. I can’t feel gratitude in this moment for a company that I gave so much of myself to, but felt it appropriate to part ways by locking me (and 12,000 of my colleagues) out of my corporate account at 4am.”

Another recruiter from the company, Dan Lanigan-Ryan, told Business Insider “he discovered he’d lost his job after a call with one of his candidates suddenly disconnected.”

At first, he and his manager assumed tech issues were to blame. A short while later, though, his email got cut off, and he realized something was up. That was confirmed when he saw the news of the 12,000 layoffs.

A former Google software engineer, Tommy York, also posted on LinkedIn about his experience:

“I was laid off from Google last week. I found out on my fourth day back from bereavement leave for my Mom, who died from cancer in December… I’ve certainly heard worse stories, including layoffs of expecting parents and of Googlers on disability leave. But it still feels like a slap in the face, like being hit when you’re down.”

Other former employees echoed the sentiment of betrayal: they had been given absolutely no warning of the termination, with emails and texts arriving while they were still asleep and no follow-up or context provided. When layoffs at scale are left to automated efficiency, the only concern is quick and immediate effect. Human feelings don’t enter into such calculations.

Keeping the Human Involved

It’s likely that Alphabet used some of its own advanced AI tech to carry out the layoff process. The outcome raises the question: Should humans allow AI the power to carry out its own decisions?

Absolutely not, answer some experts. “AI isn’t ready to make unsupervised decisions” was the warning in the title of an HBR article published a few months ago. It noted that AI has advanced a great deal in the past few years and has become more accurate; however, the article’s authors insist that it still cannot be wholly relied upon.

They explain that the algorithm can only grasp “models and data,” which fails to account for “the big picture and most times can’t analyze the decision with reasoning behind it.” Consequently, humans should be empowered to make the final decision using “augmented intelligence instead of pure artificial intelligence.”
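One way to picture augmented intelligence is as a routing rule: the model only ever recommends, and anything that touches a person's livelihood waits for sign-off. This is a minimal sketch under that assumption, with invented names, not a description of any real system:

```python
# Hedged sketch of "the model proposes, a human disposes." The function
# name, parameters, and return strings are all invented for illustration.

def route_decision(recommendation: str, affects_livelihood: bool,
                   human_sign_off: bool = False) -> str:
    # High-stakes actions are never auto-executed; the model's output is
    # an input to a human decision, not the final word.
    if affects_livelihood and not human_sign_off:
        return "pending human review"
    return f"executed: {recommendation}"

# A low-stakes action runs automatically; a termination waits for a person.
assert route_decision("send offer email", affects_livelihood=False) \
    == "executed: send offer email"
assert route_decision("terminate account", affects_livelihood=True) \
    == "pending human review"
```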

That does seem to be the wiser and more humane approach moving forward, especially for instances that affect people’s lives and livelihood. (Also Read: Why is consumer ML/AI technology so ‘disembodied’ compared to industrial mechanical/robotics projects?)

Ariella Brown
Contributor

Ariella Brown has written about technology and marketing, covering everything from analytics to virtual reality, since 2010. Before that she earned a PhD in English, taught college-level writing, and launched and published a magazine in both print and digital format. Now she is a full-time writer, editor, and marketing consultant. Links to her blogs, favorite quotes, and photos can be found at Write Way Pro. Her portfolio is at https://ariellabrown.contently.com