What is Machine Consciousness (MC)?
Machine consciousness, also known as artificial consciousness, is a term used to describe a machine that’s aware of its own existence and can think autonomously. Using mathematical models and theoretical frameworks, researchers aim to emulate human consciousness in machines.
A conscious machine would be considered a type of strong AI, which is designed to emulate the human brain and displays a mix of awareness and intelligence.
This is notably different from solutions that use weak AI, which can appear intelligent when performing certain tasks, but have no general intelligence or consciousness outside of that narrow framework.
It’s worth noting that machine consciousness is a distinct concept from artificial general intelligence (AGI). The main difference is that AGI refers to a machine that is generally intelligent but isn’t necessarily self-aware and sentient.
Is It Possible to Make Conscious Machines?
Whether or not it’s possible to make conscious machines is not just a scientific debate but also a philosophical one that’s often overlooked within AI research.
“The topic of consciousness, however, is neglected in the field to a large extent. On the one hand, this is because of the concerns that the brain and consciousness will never be successfully simulated in a computer system,” explained Patrick Krauss and Andreas Maier in Will We Ever Have Conscious Machines?
At the same time, it’s unclear whether even advanced machines that mimic human thought and emotion at a high level can be said to be conscious in the same way as an organic lifeform.
These challenges are made worse by the fact that our contemporary understanding of human consciousness is quite limited, making it difficult to build a blueprint to replicate this subjective experience mechanically.
Approaches to Building Machine Consciousness
When it comes to building conscious machines, AI researchers Liang Wang and Ziyi Ma have highlighted two main strategies: the algorithmic construction strategy and the brain-like construction strategy.
The Algorithmic Construction Strategy
This strategy is about producing algorithms to simulate the cognitive capabilities of humans with AI models. This approach primarily relies on simplifying and emulating cognitive processes. As such, while it can perform some basic tasks, it can struggle with more complex common sense problems that require abstract reasoning rather than basic pattern recognition.
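A toy sketch can make the limitation concrete. The rules, task, and function names below are invented for illustration: a hand-written lookup handles a narrow task well but has no way to generalize beyond the cases its author anticipated.

```python
# Illustrative only: a narrow, rule-based "cognitive" system.
# Hand-written rules handle anticipated cases but break outside them,
# because no actual reasoning takes place.
RULES = {
    ("rain", "outside"): "take an umbrella",
    ("sunny", "outside"): "wear sunscreen",
}

def decide(weather, location):
    """Look up a canned response for a (weather, location) pair."""
    return RULES.get((weather, location), "no rule applies")

print(decide("rain", "outside"))  # → take an umbrella
print(decide("snow", "outside"))  # → no rule applies (no common sense)
```

An unanticipated input like "snow" defeats the system entirely, whereas a person would reason by analogy from "rain" without needing a new rule.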
The Brain-Like Construction Strategy
On the other hand, the brain-like construction strategy is where developers actively attempt to replicate the structure of the human brain within the confines of a machine. This is done by building an artificial neural network that replicates the human brain’s neural structures.
Building AI in this way can help to simulate the information-processing capabilities of the brain and power potent use cases such as speech recognition, voice recognition, face recognition, and image recognition.
However, while this approach is effective, it’s very difficult for researchers to replicate the complexity of human consciousness.
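The basic building block of such artificial neural networks can be sketched in a few lines. This is a minimal illustration, not a working brain model: a single artificial neuron computes a weighted sum of its inputs and passes it through an activation function, loosely analogous to a biological neuron firing once its inputs cross a threshold. The weights and inputs below are arbitrary values chosen for the example.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation that squashes the result
    into the range (0, 1)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Toy example with two inputs and hypothetical weights.
output = neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1)
print(round(output, 3))  # → 0.557
```

Real brain-like systems stack millions of such units into layers and learn the weights from data, but even at that scale they replicate only a sliver of the brain's complexity.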
Users’ Perceptions of Machine Consciousness
Since the launch of ChatGPT in November 2022, there have been many misconceptions that fluent generative AI solutions can be considered AGI.
For example, chatbots like ChatGPT can respond to user prompts in natural language with significant depth in a way that indicates autonomous thinking on the surface, but in reality, they simply predict the next word in a sequence of text based on patterns they’ve learned from their training data.
So while such solutions may appear to exhibit AGI, they are incapable of thinking and maintaining self-awareness the way a human mind does.
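The next-word idea described above can be illustrated with a deliberately simplified model. Real chatbots use large neural networks over subword tokens, not word counts; this sketch (the corpus and helper names are invented for illustration) just shows that "predicting the next word from observed patterns" involves no understanding at all.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

The model produces plausible continuations purely from frequency statistics; scaled up enormously, the same predict-the-next-token principle can produce text that looks like autonomous thought.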
Despite the limitations of current AI technologies, some consumers do perceive consciousness in these technological solutions.
A research paper titled Do You Mind? User Perceptions of Machine Consciousness surveyed 100 people to discover whether or not they perceived machine consciousness in available technologies and found that many respondents perceived consciousness in GPT-3 and a robot vacuum cleaner.
While this is a small study, it raises the question of whether a machine’s classification as conscious will ultimately depend on whether a human being considers it capable of conscious thought.
Consciousness is a difficult concept to pin down. Until neuroscientists find a way to make consciousness and human thought objectively measurable, it’s going to be a monumental task to attempt to replicate human intelligence.
Even if researchers develop machines that are capable of emulating human thought in its entirety, the fact that they merely emulate thought, combined with their mechanical nature, could always be used to undermine their status as “conscious.”