We hear a lot about artificial intelligence (AI) and its transformative potential. What that means for the future of humanity, however, is not altogether clear. People don’t even always have the same definition in mind when they use the term “artificial intelligence.” And even once you establish what capabilities can be attained, there remains the question of their ramifications.
Some futurists believe life will be vastly improved, while others think it is under serious threat. Between the two extremes lies a wide range of views. Here are some takes along that spectrum from 12 experts.
1. Sci-fi and Reality Are Not on the Same Page
What the popular imagination tends to associate with AI—a synthetic being that has consciousness of its own—is still in the realm of science fiction:
Richard Socher is currently the CEO of you.com, a new trusted search engine. He wrote those words just a couple of years ago, when he was the chief scientist (EVP) at Salesforce.
2. Defining Artificial Intelligence
John Frémont is the Founder and Chief Strategy Officer at Hypergiant. He spoke these words at a Salesforce-sponsored presentation called Building A Better World: AI And The Future Of Business.
3. You Say “Tapestry,” I Say “Frankenstein.”
While “tapestry” carries a nice connotation for a composite, there are those, like Tristan Harris, who see it instead as a monstrous synthesis:
That take was quoted, appropriately enough, in How to Stop Technology From Becoming a Digital Frankenstein. Harris is the Co-Founder & Executive Director of the Center for Humane Technology. He has expounded on his concerns in multiple presentations, including one entitled Humane: A New Agenda for Tech with Tristan Harris, viewable on Vimeo.
In that presentation, he offered another vivid image to picture how insidious the effect can be:
4. You Should Be Worried
Others share the concern about something so powerful growing and expanding unchecked. But some say we’re worrying about the wrong thing:
Andrew Ng, founder and CEO of Landing AI and founder of deeplearning.ai, also teaches at Stanford. This quote comes from Andrew Ng: Why AI Is the New Electricity, which recounts what he said to the Stanford Graduate School of Business community as part of a series presented by the Stanford MSx Program.
The focus has to be on societal impact, he argued. “Evil AI hype,” he believes, diverts attention from the critical issue of displaced jobs:
5. Jobs Will Shift
Ng is not alone in bringing up the future of jobs:
James Manyika is the Chairman and Director of the McKinsey Global Institute (MGI). He voiced his concern about whether humans will adapt to the new future by equipping themselves with the right skills in a Salesforce panel entitled The Future of Work in the AI Age.
6. Never Mind Coming, the Robots Are Here
Indeed, the advance of automation, combined with robotics, is bringing just what Ng warned about into our current reality:
The full quote appears in context in National Geographic’s The robot revolution has arrived:
“We’ve gotten used to having machine intelligence that we can carry around with us,” said Manuela Veloso, an AI roboticist at Carnegie Mellon University in Pittsburgh. She held up her smartphone. “Now we’re going to have to get used to intelligence that has a body and moves around without us.”
7. The Glass Is Half Full on the Jobs Front
On the other hand, some see the rise of the machines as a good thing, believing that the jobs displaced will be more than offset by those created.
Svetlana Sicular is a research vice president at Gartner whose quote appears in a press release: Gartner Says By 2020, Artificial Intelligence Will Create More Jobs Than It Eliminates. Even while acknowledging that AI will lead to the loss of millions of jobs, she considered the tradeoff worthwhile for the improved productivity and the shift to new job opportunities.
While some of those new jobs would require a high level of skill, she believed there would still be openings for positions that don’t demand such advanced skills. The main loss, she predicted, would be middle- and low-level positions.
8. Explain It to Us Humans
One additional concern about AI that has drawn increased attention over the past several years, as more and more business decisions are dictated by algorithms, is its “black box” nature. (Read also: AI’s Got Some Explaining to Do.)
Now it’s not just people in the field who are raising the alarm about having determinations made by a program that doesn’t have to account for its decisions and their ramifications. Lawyers who fight for their clients’ rights have to contend with institutions that say it wasn’t a person who made the decision to cut services but an algorithm.
Michele Gilman is a clinical law professor at the University of Baltimore who was recently featured in MIT Technology Review’s The coming war on the hidden algorithms that trap people in poverty:
“A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.”
Gilman was puzzled that, as her elderly, disabled client grew sicker, the client was allotted fewer hours of care services rather than more. Only once they got to the hearing did the state’s representative, a nurse, reveal that they were using a new algorithm. Gilman recalled:
"She couldn’t answer what factors go into it. How is it weighted? What are the outcomes that you’re looking for? So there I am with my student attorney, who’s in my clinic with me, and it’s like, ‘Oh, am I going to cross-examine an algorithm?’”
Given that algorithms hold the power to decide who is entitled to life-preserving services and who is not, it becomes essential to understand the processes they use. Despite appearances of being scientific and objective, not all algorithmic models are truly reliable.
9. The Question of Trust
Alexander Amini is a computer scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). This quote is from a late November 2020 article in MIT News, A neural network learns when it should not be trusted.
He elaborated on the ramifications of depending completely on something that is not reliable 100% of the time. He pointed out: “Neural networks are really good at knowing the right answer 99 percent of the time.” But even that one percent error rate is not acceptable when lives are on the line.
That’s a problem, because the tendency in the past has been simply to work out the models without identifying when they might be wrong. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently,” Amini explained.
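Amini’s point is easier to see with a concrete illustration. The sketch below (plain PyTorch, with made-up toy data) shows one standard way to give a network a sense of its own confidence: alongside each prediction it outputs a variance, and training with the Gaussian negative log-likelihood penalizes confident wrong answers most heavily. This is an illustration only, not the method from Amini’s paper, which goes further by also flagging inputs the model has never seen.

```python
# A minimal sketch of a network that reports its own uncertainty.
# Illustrative only; not the method described in the MIT News article.
import torch
import torch.nn as nn

class UncertaintyAwareNet(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)    # the prediction itself
        self.logvar_head = nn.Linear(hidden, 1)  # how uncertain the model is

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Gaussian negative log-likelihood: a confident (low-variance) wrong
    # answer costs more than an uncertain one.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# Toy data: a noisy sine wave, so there is genuine noise for the model to learn.
torch.manual_seed(0)
x = torch.rand(512, 1) * 2 - 1
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)

model = UncertaintyAwareNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    mean, logvar = model(x)
    loss = gaussian_nll(mean, logvar, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned standard deviation should approach the true noise level (0.05).
with torch.no_grad():
    _, logvar = model(x)
print(f"avg predicted std dev: {logvar.exp().sqrt().mean().item():.3f}")
```

The design choice is the point: instead of forcing the model to always give a single answer, the second output lets it say “here is my answer, and here is how much you should trust it,” which is exactly the capability Amini argues is missing when lives depend on that last one percent.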
10. The Humans Behind the Algorithms Influence Outcomes
Vivienne Ming is the Executive Chair & Co-Founder of Socos Labs. She was also named one of 10 Women to Watch in Tech by Inc. Magazine. She shared her views on AI in a presentation entitled Understand Your Love/Hate Relationship With AI. (Read also: Fairness in Machine Learning: Eliminating Data Bias.)
11. The Way Forward Is to Adopt an Ethical and Transparent Approach
Kathy Baxter’s specialty is precisely that. She is an Ethical AI Practice Architect for Salesforce. She shared her insight into the “biases that live in our data” in an interview for the Salesforce blog. Her solution also calls for diversity, for the same reasons laid out in Why Diversity is Essential for Quality Data to Train AI.
12. Are Companies Honest About Tackling the Ethical Questions?
Major companies now have specialists and even whole departments devoted to keeping their AI within ethical bounds. But is this commitment sincere, or only so much lip service to standards they hope to control themselves?
Timnit Gebru co-authored a then-unpublished paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
The ideas in the paper may have cost Gebru her job on Google’s ethical AI team. On December 2 she tweeted that her “resignation” was being accepted by the technology giant:
Gebru was Joy Buolamwini’s co-author on Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, which demonstrated that facial recognition was fairly accurate for white men but much less so for women and people of color. (Read also: Can AI Have Biases?) Google was proud to have her on the team for that work but balked at her latest research findings.
In reporting on her forced resignation from Google, MIT Technology Review also discusses what was in that controversial paper: “Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest.”
By calling attention to these issues, the paper raises considerations that the company may prefer to keep under the rug, including questions about the “negative environmental impact and inequitable access to resources” that reinforce the divisions between the haves and have-nots.
Emily Bender, a professor of computational linguistics at the University of Washington, is one of the seven co-authors of the paper. She provided the draft copy for the MIT Technology Review article. Bender explained the need not just to push ahead but to think about the ramifications: