Question

Why do AI engineers have to worry about intuitive engines?

Answer

The idea of human intuition is now a major part of groundbreaking artificial intelligence work, which is why AI engineers pay so much attention to “intuitive engines” and similar models. Scientists are trying to crack the process of human intuition and simulate it in artificial intelligence systems. However, in exploring how logic and intuition work in neural networks and other AI technologies, the definition of intuition itself turns out to be somewhat subjective.

One of the best examples is the use of a powerful supercomputer to beat human champions at the game of Go, a game often described as intuitive even though it also relies on hard logic. Since Google's AlphaGo beat expert human players, there has been much speculation about how good computers are at human-style intuition. However, if you look at how these systems are actually built, much remains to be determined about how much they rely on intuition and how much they rely on extensive logic models.

In a game of Go, a human can place a move based on intuitive perception, long-range logic, or a mix of both. By the same token, computers can build expert Go-playing systems on extensive logical models that mirror or simulate intuitive play to an extent. So in discussing how good computers may be at intuitive play, it is important to define intuition, which the scientific community has not yet fully done.
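AlphaGo's published system combined deep neural networks with Monte Carlo tree search, a family of "extensive logic models" that simulates many random continuations of a game and prefers the moves that win most often. As a rough, toy-scale sketch of that idea (the miniature game and every name below are our own illustration, not AlphaGo's code), flat Monte Carlo move evaluation looks like this:

```python
import random

# Toy game: players alternately take 1 or 2 stones; whoever takes the last
# stone wins. This is NOT AlphaGo's algorithm -- just a minimal illustration
# of Monte Carlo move evaluation, the "logic model" family AlphaGo built on.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def random_playout_wins(stones):
    """Play out randomly from `stones`; return True if the player to move wins."""
    to_move_wins = True
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move_wins          # whoever just moved took the last stone
        to_move_wins = not to_move_wins  # turn passes to the other player

def best_move(stones, playouts=4000):
    """Pick the move whose random playouts win most often for us."""
    moves = legal_moves(stones)
    win_rate = {}
    for m in moves:
        wins = 0
        n = playouts // len(moves)
        for _ in range(n):
            remaining = stones - m
            if remaining == 0:
                wins += 1                             # we took the last stone
            elif not random_playout_wins(remaining):
                wins += 1                             # opponent, moving next, lost
        win_rate[m] = wins / n
    return max(win_rate, key=win_rate.get)
```

In a real system the playout policy and tree statistics are far more sophisticated, but the principle, averaging over many simulated futures rather than "feeling" the position, is the same.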

Mary Jolly at the University of Lisbon notes different opinions on definitions of intuition in a paper called “The Concept of Intuition in Artificial Intelligence.”

“There is no consensus among scholars about the definition of the concept,” Jolly writes. “Until recently, intuition did not yield to rigorous scientific methods of study and, often associated with mysticism, has been habitually avoided by researchers. So far, the discourse on the subject has lacked coherence and method.”

If the concept of intuition is itself inherently vague, measuring how well artificial intelligence simulates intuition is going to be even more problematic.

One explanation by the writers of a paper called “Implementing Human-like Intuition Mechanism in Artificial Intelligence” suggests the following:

Human intuition has been simulated by several research projects using artificial intelligence techniques. Most of these algorithms or models lack the ability to handle complications or diversions. Moreover, they also do not explain the factors influencing intuition and the accuracy of the results from this process. In this paper, we present a simple series based model for implementation of human-like intuition using the principles of connectivity and unknown entities.

For a perhaps more concrete look at the process of human intuition, a Wired article cites MIT research on the human mind's “intuitive physics engine,” which describes what happens when we look at a stack of objects. We can intuitively judge whether the objects are likely to fall or remain stable, but this intuition is based on extensive logic rules we have internalized over time, as well as on our direct vision and perception.
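A crude version of that "will the stack fall?" judgment can be written down directly. The following is a 1-D sketch under our own simplifying assumptions (uniform blocks, width standing in for mass); it is not the MIT model itself, only the kind of internalized rule such a model captures:

```python
# A stack is (statically) stable if, at every level, the combined centre of
# mass of the blocks above rests over the footprint of the supporting block.
# 1-D simplification: each block is (x_center, width), listed bottom to top,
# with width used as a stand-in for mass (uniform density assumed).

def stack_is_stable(blocks):
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        total_w = sum(w for _, w in above)
        com = sum(x * w for x, w in above) / total_w  # centre of mass above
        support_x, support_w = blocks[i]
        if abs(com - support_x) > support_w / 2:
            return False  # the weight above overhangs its support
    return True
```

A gently staggered stack like `[(0, 4), (0.5, 2), (1.0, 2)]` passes the check, while a block pushed far past the edge of its support fails it, which roughly matches the snap judgment a person makes at a glance.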

Writer Joi Ito points out that the systems in which we intuitively use our physics engines are “noisy,” and that we are able to filter out that noise. Extracting sense from noisy data has been a big part of developing artificial intelligence. However, those models have to go much further to deliver the kinds of predictions and analyses that humans can apply to complex systems.
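As a small illustration of "extracting sense from noise," the sketch below smooths a noisy reading of a constant quantity with a moving average; the signal and parameters are invented for illustration, and real systems use far more capable filters:

```python
import random

def moving_average(signal, window=5):
    """Smooth a noisy 1-D signal by averaging each point with its neighbours."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A constant "true" value corrupted by random noise: averaging nearby samples
# pulls each estimate back toward the underlying value.
true_value = 1.0
noisy = [true_value + random.uniform(-0.5, 0.5) for _ in range(200)]
smoothed = moving_average(noisy, window=21)
```

The smoothed estimates sit much closer to the true value than the raw samples do, which is the basic sense in which filtering recovers signal from noise.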

One way to put it is that, to achieve this outcome, computers would have to combine sophisticated vision with extensive logic and perceptive cognition in ways that they currently cannot. Another way to explain it is that the human brain remains a “black box” that has not been wholly reverse-engineered by technology. Although our technologies are highly capable of producing intelligent results, they cannot yet simulate the powerful, mysterious and amazing activity of the human brain itself.

Justin Stoltzfus
Contributor

Justin Stoltzfus is an independent blogger and business consultant assisting a range of businesses in developing media solutions for new campaigns and ongoing operations. He is a graduate of James Madison University. Stoltzfus spent several years as a staffer at the Intelligencer Journal in Lancaster, Penn., before the merger of the city’s two daily newspapers in 2007. He also reported for the twin weekly newspapers in the area, the Ephrata Review and the Lititz Record. More recently, he has cultivated connections with various companies as an independent consultant, writer and trainer, collecting bylines in print and Web publications, and establishing a reputation…