What Will It Take to Trust AI? Overcoming AI Fears, Uncertainty, and Doubts

Key Takeaways

- It can actually be easier to trust AI than people, since algorithms can be broken down and analyzed in ways the human psyche cannot.
- Trusting AI requires implementing frameworks like TRiSM before deployment, along with rigorous data vetting and transparency.
- AI must earn trust by proving itself on limited goals under ongoing monitoring.

Leading experts in artificial intelligence (AI), even some who were instrumental in its development, are now sounding warnings about the potential harm it could cause to the human species.

This is prompting governments to call on the technology community to start building safeguards into their models to prevent them from running amok or being used for nefarious purposes.

Essentially, the expectation is that there is some kind of code or algorithm that can make an AI model worthy of trust. But is this even possible? And even if a model can be digitally trusted, how can we ensure that people will not betray that trust, either intentionally or unintentionally?

Trust What You Know

One of the key elements of trust is understanding. People trust one another because they have achieved a comfortable level of understanding regarding the way others think, what motivates them, and how they’ve reacted to situations in the past.

This same approach is actually easier with AI because, although a model is highly complex, there are ways to monitor and analyze its behavior at a highly granular level.
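As a minimal sketch of what granular monitoring can look like in practice (the model interface and log format here are illustrative assumptions, not a standard), every prediction can be logged with its inputs and confidence so the model’s behavior can be audited after the fact:

```python
import json
import logging
from datetime import datetime, timezone

# Write an audit trail of every prediction the model makes so its
# behavior can be analyzed later at a granular level.
logging.basicConfig(filename="model_audit.log", level=logging.INFO)

def audited_predict(model, features: dict) -> dict:
    """Run a prediction and record it for later behavioral analysis.

    Assumes `model.predict(features)` returns a (label, confidence)
    pair; the interface is hypothetical.
    """
    label, confidence = model.predict(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,
        "prediction": label,
        "confidence": round(confidence, 4),
    }
    logging.info(json.dumps(record))
    return record
```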

At the moment, the tech industry has developed a three-legged stool for AI governance called TRiSM: Trust, Risk, and Security Management. According to Gartner, organizations that embrace TRiSM as an operational element stand to see a 50% improvement in AI adoption, fulfillment of business goals, and the overall user experience.

Trusted AI could account for 20% of the global workforce and generate some 40% of economic productivity by 2028.

The key to this success, however, is to implement frameworks like TRiSM before models are deployed into production environments, not after, says Avivah Litan, Gartner Distinguished VP Analyst.

“Don’t wait until models are in production to apply AI TRiSM. It just opens the process to potential risks. IT leaders should familiarize themselves with forms of compromise and use the AI TRiSM solution set so they can properly protect AI.”

In fact, waiting could introduce greater risk to intelligent processes because effective implementation requires cross-functional cooperation. Diverse teams such as analytics, security, legal, and line-of-business all need to be on the same page, and that is easier to accomplish upfront than after dependencies and parochial needs have already taken hold.

Fortunately, there are many ways to make AI more trustworthy.

Caltech professors Yisong Yue and Anima Anandkumar highlight a number of best practices to ensure models are performing as needed, starting with rigorous vetting of the data that is used to train them. AI also needs clear instructions right from the start if it is to have any chance of returning the correct results.
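What “rigorous vetting” might look like in code, as a rough sketch (the label column name and the tolerance thresholds here are illustrative assumptions):

```python
import pandas as pd

def vet_training_data(df: pd.DataFrame, label_col: str = "label") -> list[str]:
    """Return a list of data-quality issues found in a training set."""
    issues = []

    # Missing values undermine whatever is trained on top of them.
    for col, frac in df.isna().mean().items():
        if frac > 0.05:  # illustrative 5% tolerance
            issues.append(f"{col}: {frac:.1%} missing values")

    # Duplicate rows can silently inflate apparent accuracy.
    dup_frac = df.duplicated().mean()
    if dup_frac > 0.01:
        issues.append(f"{dup_frac:.1%} duplicate rows")

    # Severe class imbalance is a common source of skewed predictions.
    class_shares = df[label_col].value_counts(normalize=True)
    if class_shares.min() < 0.10:
        issues.append(f"rarest class is only {class_shares.min():.1%} of the data")

    return issues
```

A model trained on data that fails checks like these is unlikely to behave predictably, no matter how sound the rest of the engineering is.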

Transparently Intelligent

On a more fundamental level, most AI models suffer from a lack of transparency into their inner operations. Even the experts often have to dig deep into code to figure out why a particular algorithm behaved the way it did.

In many cases, unforeseen results stem from data patterns that humans cannot perceive, which leads to actions that cannot be predicted and the erosion of trust that the model will behave as expected in the future.
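One common way to recover at least partial visibility is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops, which reveals what the model is actually relying on. A minimal NumPy sketch, assuming a classifier that exposes `model.predict(X)`:

```python
import numpy as np

def permutation_importance(model, X: np.ndarray, y: np.ndarray,
                           n_repeats: int = 10, seed: int = 0) -> np.ndarray:
    """Estimate how heavily the model leans on each feature.

    Shuffling a feature breaks its relationship with the target; the
    resulting drop in accuracy measures the model's reliance on it.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)
    importances = np.zeros(X.shape[1])

    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's signal
            drops.append(baseline - np.mean(model.predict(X_perm) == y))
        importances[j] = np.mean(drops)

    return importances
```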

Further complicating matters is the fact that there are many kinds of trust. Ramón Alvarado, assistant professor of philosophy at the University of Oregon, argues that the only kind of trust that should be given to AI is epistemic trust; that is, we only trust its ability to expand our capacity to understand things.

This is very different from the kind of trust we place in typical machines or even complex environments like the healthcare system. In part, this is due to the relatively finite expectations we have for most things and the static way in which they function. Healthcare professionals are expected to make us well, not pave our driveways, and vice versa.

AI offers a more open-ended relationship with humans in that it is capable of doing virtually anything in any way that suits it. For this reason, any attempt at trusting it other than epistemically – such as cognitively, intellectually, and psychologically – will simply lead to conceptual confusion.

How Could AI Earn Our Trust?

Ultimately, trust must be earned regardless of whether the intelligence is artificial or biological. As author Gary Marcus explained to The Economist recently, only by proving itself capable of achieving limited goals should AI be trusted with more important responsibilities.

This requires not just good engineering from the start but ongoing monitoring and optimization to ensure a given model does not spin out of control as its data environment, and possibly its operational mandate, changes over time.
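In practice, that ongoing monitoring often starts with drift detection: comparing the distribution of incoming data against the data the model was trained on. A minimal sketch using SciPy (the significance threshold is an illustrative assumption):

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_col: np.ndarray, live_col: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Flag when live data no longer matches the training distribution.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value means
    the live distribution has shifted and the model should be reviewed.
    """
    _, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha  # illustrative significance threshold
```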

This effectively puts AI on the same footing as humans, whose inner thought processes are just as mysterious and unknowable as today’s neural networks. Just as you wouldn’t hire a recent college graduate as your new CFO, you shouldn’t put AI in charge of your company’s entire financial footprint.

Only after a lengthy period of competent service should AI be trusted with mission-critical responsibilities. And even then, a well-run organization will have checks and balances to ensure things don’t go awry regardless of who – or rather what – is running things. These precautions can involve AI keeping track of other AI, just as humans track the activities of their co-workers.
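As a rough sketch of what that AI-on-AI oversight could look like (both model interfaces here are hypothetical), a primary model’s output can be routed through an independent checker, with any disagreement escalated to a human:

```python
def guarded_answer(primary, checker, query: str) -> str:
    """Cross-check one model's output with another before acting on it.

    `primary.generate(text)` and `checker.approves(query, answer)` are
    assumed, hypothetical interfaces.
    """
    answer = primary.generate(query)
    if checker.approves(query, answer):
        return answer
    # Disagreement between the two models is the signal for human review.
    return f"[escalated to human review] {query!r}"
```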

The Bottom Line

Perhaps the fundamental challenge to building AI we can trust is the fact that, for the first time, we are faced with sharing the interconnected digital world with an intelligence unlike our own.

Not only is it capable of absorbing information about its surroundings, but it will soon have a pronounced ability to make its own decisions, come up with its own ideas, and take actions that it deems appropriate – sometimes for its own reasons, not ours.

This represents a radical shift in the natural order, one that will bring untold consequences to the human psyche.

Perhaps, then, the real challenge facing humanity is not whether we should trust AI, but whether we can trust ourselves now that we are no longer the smartest beings on the planet.


Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.