Stanford Professor Ron Gutman: ‘AI Will Change Us All — But We Need Ethics’

The emergence of generative AI and humanoid robotics is raising questions about what it will mean to be "human".

While some observers warn that we are heading for AI-induced civilizational doom, the irreversible point known as the "technological singularity", others take a more hopeful view: that the next wave of advanced AI will improve humanity's prospects.

Ron Gutman, a Stanford University Adjunct Professor, argues that the fears around AI will be outweighed by the benefits of accelerated cognition.

Case in point: advanced AI is already driving discoveries and solutions for humanity that neither a human brain nor a computer could achieve alone.

For example, researchers are using machine learning to identify hundreds of thousands of new viruses and so-called "zombie cells", discoveries that help us understand how the world works and how humans fit into the equation.

Gutman argues we need to think differently about emerging AI: it will help us and improve our way of life and capabilities as humans.

Techopedia spoke with Gutman about where society currently finds itself in terms of the deployment and advancement of AI technologies and the importance of the compassion paradox — training machines to care may teach us to be more human — in managing the potential impact on human life, jobs, and health.

Key Takeaways

  • Stanford’s Professor Ron Gutman argues that AI will enhance humanity but requires ethical considerations.
  • Gutman highlights AI’s potential in healthcare, mental health, and personal freedom.
  • Robotics will support logistics, elderly care, and job automation.
  • Gutman sees the future of human-AI convergence as a new evolution for society.
  • Ethical AI, he argues, must prioritize compassion to avoid dangerous outcomes.

AI Over The Next 10 Years

Ron Gutman speaks to Techopedia about the future of AI. (Source: Supplied)

Q: How do you see AI technologies advancing over the next 10 years?

A: AI is the kind of technology that expands and improves exponentially, so it’s not going to be linear. It’s starting to move faster now, and we’re still at the learning stage before we get to the exponential part of the curve.

But we’re going to get there because a lot of people are focused on it.

Now with OpenAI, Google, Microsoft, Nvidia, and others building the stack and investing heavily, we're creating much of the infrastructure that enables a whole pyramid of developers to start building AI applications that can transform everything we do.

AIs are moving from being tools like a hammer or a fork, which human beings have used over time to do things they otherwise couldn't do themselves, to becoming part of us.

It is becoming increasingly integrated into our lives in a way that is seamless. It's what I call convergence.

For example, the security system in my house knows when to turn on and off and how to interact directly with the authorities if something happens. I don’t need to worry; it’s freed up a part of my psyche that otherwise would be worried about what’s happening at home.

That’s profound — that’s not a tool but becomes part of who we are.

Otherwise, we need to use our mental capacity and stress about it. By doing this, we’re taking a lot of the stressors out of our lives, which can improve mental health and free up our minds to do other things.

Once it starts changing consciousness, AI changes who we are, how we operate in the world, and how we conduct ourselves. It’s more profound than being able to help write a recipe. In the end, whether it’s driving cars, buying groceries, or planning itineraries — all these things will be replaced by AI.

It's still controversial, but professionals like lawyers, accountants, and CEOs are all going to be replaced, because in applying logic and making decisions, the AI is better.

That's one side of the equation that will continue improving: executing on tasks. And not just brain-related or mind-related tasks, but physical tasks, because we're moving into robotics.

When Robots & AI Meet

Q: How far do you think we’ll go in adopting practical applications for robots in the short term?

A: Very far. Look at how quickly millions of cashiers in the US alone lost their jobs to automated systems. It’s already a process that’s been going on for the last couple of decades, but it will accelerate now in automating factories and agriculture.

This is good in some cases because it means that some of the workers experiencing brutal conditions will be replaced by machines.

Initially, functional robotics will help us with logistics, but eventually, robots will help the elderly, for example. Robots will move items from one place to the other for them, but they’re also going to be there as companions to talk about memories, to call their kids, to organize the house, and so on.

You can see a world, not too far in the future, where these companions will not just be voice companions, but will help the elderly out of the house. Automated wheelchairs will mean the elderly are not confined to their homes and can stay independent for longer.

These are technologies that need to be honed and perfected, but they already exist.

In engineering, from the architecture of buildings to digital applications, we already have AIs coding full applications that are using data to create better experiences. The engineers who are training these AIs are taking away their own jobs.

Q: What are the implications of AI replacing these jobs?

A: We all become house cats for AI. It’s a trajectory; it’s something that will take time to evolve. In the first stage, there is displacement. Like the Industrial Revolution, you have technology that displaces existing jobs.

It’s scary, and governments need to handle this because it’s not going to handle itself, and it can create a lot of unrest. We don’t want to lose jobs without providing people with alternatives.

We need to take care of these people, especially those who cannot adapt fast enough, and ensure that they have a safety net.

And for those who can, train them quickly for the new set of jobs that will support what the AI does to achieve better outcomes. Some of them we can predict, and some of them we cannot predict.

With the acceleration of creating new medical therapies because of AI, we’re going to solve so many human health issues in 10-20 years that will change entirely the trajectory of life — the longevity, how many years we live, how many years we are available to work.

Our minds are going to be clearer, and our bodies are going to be stronger. So, we need to be very mindful and maybe use our AIs to help us think about what to do with these people to keep them productive; to keep them supporting what the AI does to take it to the next level.

But in our lifetime, there’s going to be plenty for people to do until the AI catches up. Who’s going to create these AIs? It’s people.

Changing Perspective on the AI Relationship

Q: Why do you view the “us vs. them” attitude to human-AI relations as a false dichotomy?

A: Rather than thinking that eventually machines and humans will clash and one of them will dominate the other, like in the Terminator, I believe that what we’re going to see is convergence.

The human species will enter the next stage of evolution, where humans and machines become one, and our capabilities are endless.

It sounds a bit scary and maybe a little bit robotic, but we’re not going to turn into robots, and AIs are not going to turn into humans.

The Neanderthals, the Homo Sapiens, they’re all humans, but they’re a different kind of human.

We can’t even imagine at this point what it will be possible to do in 10 years. But we need to accept it. The most important point is empathy — we need to make sure that we teach our AIs how to feel.

We need to make sure that when we’re creating the training sets for them and creating the guardrails that we make sure they are ethical. It sounds funny to teach a machine to be emotional. But I’d argue that we have to. We must create this convergence, to not have this Armageddon-type of machine fighting with humans.

AI can be designed only to a certain point, because you train them on a model of learning and they learn from what they see.

But the process must be designed in a way that makes them feel, makes them compassionate, and makes them mindful. Because then the convergence will produce a better species.

We as humans are not killing the weak ones in our pack; it's survival of the fittest, not survival of the strongest, and we take everybody with us because we believe that this is the right way to do it.

That’s an important part of what makes us the most successful species, because we care about our weakest ones. We want to make sure that when the AIs are stronger than us, they care about us as well.

Q: Is this the answer to the doomsayers that warn AI will eventually endanger humanity — to teach the algorithms compassion?

A: It is not dangerous, as long as we’re not leaving the compassion, the humanity, the caring and the ethics as an afterthought. As long as we’re putting the guardrails in place. We need to do some regulation around the AIs and who has the right to start developing, because they can be easily weaponized in the wrong hands.

The last mile is open, but we can regulate some of the infrastructure to prevent things from going out of control. Just like we regulated nuclear weapons — this is a lot more potent than a nuclear weapon.

We need to figure out a way to regulate, both by the developers that are smart enough to run very quickly now and eventually by nations. Nations will play an important role in regulating AI and creating global treaties to make sure that we’re going to a world in which convergence is a positive thing, rather than used by the wrong people for the wrong reasons.

Q: That’s a dilemma for the international community, isn’t it?

A: Yes, it's already happening, unfortunately. With any great technology, from gunpowder to the Internet, you find a lot of good actors who are making humanity better off and a handful of bad actors. As much as we're excited about the good things we continue to develop, this technology needs to be curbed, like any other, to ensure that we as a society don't allow bad actors to weaponize it and use it against humanity.

AI & Healthcare Show the Way

Q: Can you tell us more about how AI is making healthcare advances?

A: Drug discovery is already a huge benefit of artificial intelligence, from figuring out new molecules to structures, which would take years to explore. The potential is huge in diagnostics.

There are now advanced cardiac ultrasound machines packaged with AI systems that can identify irregularities in ways that the human eye cannot, both in a snapshot and over time, by comparing to other people like you and to what you did before. The doctor can use it in their office; they don’t even need to send it to a laboratory.

AI also has an extremely important role in providing care. We’re already seeing that people are comfortable having conversations with AI.

In a world in which you have an AI companion that is available 24/7 and it’s never tired, upset, or busy doing other things — and frankly has nothing better to do than help you — wow!

If you make the AI good enough, compassionate enough, and present enough, people will connect to it. In some experiments with AI now, 50% of people can’t even tell the difference between an AI and a person.

Generative AI can help people make better decisions about their health and wellbeing from diagnosing and understanding what to do next, to eventually, improving the process of care itself.

This is the opportunity to provide a companion who can help guide you through the care process and achieve a better outcome in real time. Nobody can afford to have a doctor with them all the time. But if you connect all your data to a concierge AI, they have the right knowledge to not only guide you through the process but also help you remember to do things and change course when your health is changing.

We are already seeing robotic surgery using cyber knives, but as the technology gets more advanced, there will be more surgeries and procedures that AI will eventually be able to perform as well.

Now, with full prosthetics printing, we have the capability to print organs. AI will play an important role in keeping these systems viable and making them easily integrated into the human body.

That’s a little bit further down the line, but it’s where AI becomes unbelievably exciting.

But it needs to have a soul: With all these advancements, let’s make sure that our AIs are also compassionate, caring, kind, and all these good things.

Q: There’s potential in developing countries to overcome serious health problems but how do we overcome the issue of data bias or insufficient data for certain populations?

A: We need to make sure that our datasets include people of all origins so that we train our systems in ways that take into account that there’s some difference.

The majority of people are similar but there are some nuanced differences, and we don’t want them to discriminate. It’s exactly the compassion we’ve been talking about.

We have learned over the years the terrible cost of discrimination, and we don’t want to see it again just because we have a new technology.

This is essential, especially at the design stage, because AI will play an even more important role in areas where people are less fortunate. Rather than replacing the traditional technology that we've had in the West, a lot of these AIs will be the first technology available to people there, because it's going to be very competitive.

We want to make sure that there are no biases at that point.

Q: Does there need to be new groups of people or organisations that specifically collect data or ensure that it is appropriate and ethically balanced?

A: Absolutely. That’s something the big companies that are responsible for training the underlying AIs need to do, but there’s also an opportunity for groups themselves to make sure that they are being represented.

I’ve been working with engineers my entire life and for the most part they’re extremely ethical, mindful people. We need to do it by educating.

It starts at the universities, by going to the computer science departments and making sure that there are classes dedicated to ethics, so that the next generation of people who will be hands-on designing these systems is educated to ensure that there's diversity.

We need to lobby the big tech companies, and eventually as we’re getting into regulation, make sure that we have laws that provide equal opportunity so that the training is done right.

Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.