Even today, in 2023, there’s a lot of confusion about what AI, machine learning (ML) and deep learning (DL) are, what ‘intelligent machines’ can do and what the current state of AI technologies actually is.
With constant misinformation, it’s no wonder so many myths have sprung up. It’s time to enjoy some good old debunking as we bust the 10 most common myths about AI (Also Read: Is the AI Revolution Going to Make Universal Income a Necessity?).
1. “AI consists of intelligent robots or androids that look like humans.”
There’s a lot of confusion between robotics and AI, but they are two distinct scientific fields that serve different purposes.
Robots are tangible devices equipped with actuators and sensors that perform a wide range of physical tasks, such as building, carrying or dismantling products in factories.
AI is software programmed to be autonomous enough to make decisions and learn from its mistakes. Although some robots may incorporate AI algorithms, the ‘intelligent’ software is just one component of the machine.
2. “AI, machine learning and deep learning are all the same thing.”
Although they are closely related, AI, machine learning and deep learning are three different things: machine learning is a subset of AI, and deep learning is a subset of machine learning.
Machine learning is the set of methods by which a system learns from data, using algorithms to find patterns and adjust its behavior without being explicitly programmed for every case.
Deep learning is just one technique used in practical applications of machine learning. It is based on artificial neural networks (ANNs) with many stacked layers, which learn increasingly abstract representations of the data in order to make predictions.
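To make the distinction concrete, here is a minimal, self-contained sketch in plain Python. Both approaches learn from the same toy data; the “machine learning” part learns a simple decision rule directly, while the “deep learning” part is reduced to its smallest possible ingredient, a single artificial neuron trained by gradient descent (a real deep network would stack many layers of such neurons):

```python
import math

# Toy dataset: classify numbers as "large" (1) if x is big, else "small" (0).
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]

# Classic machine learning: learn a simple decision threshold from the data.
def learn_threshold(samples):
    # Midpoint between the largest 0-example and the smallest 1-example.
    max_neg = max(x for x, y in samples if y == 0)
    min_pos = min(x for x, y in samples if y == 1)
    return (max_neg + min_pos) / 2

threshold = learn_threshold(data)              # 5.0 for this dataset
predict_ml = lambda x: 1 if x > threshold else 0

# Deep-learning style: one artificial neuron (weight + bias) trained by
# gradient descent on the same data.
w, b = 0.0, 0.0
for _ in range(5000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))   # sigmoid activation
        grad = p - y                            # cross-entropy gradient
        w -= 0.1 * grad * x
        b -= 0.1 * grad

predict_nn = lambda x: 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0
```

Both predictors end up agreeing on this data; the difference is that the neural approach learned numeric weights from examples rather than an explicit rule, which is what lets stacked layers of such neurons scale up to much harder problems.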
3. “AI learns completely on its own.”
Despite exaggerated hype about certain AI systems allegedly able to learn on their own, no AI-powered system with a real-world application can grow from zero knowledge without human assistance.
AI still struggles with hidden information and uncertainty of any kind; it must be fed input and data by humans, and every bit of that information must have a clear purpose. AI cannot make guesses: it works from external sources and previous data, and it cannot conceptualize abstract ideas the way humans can.
4. “AI is always better than human employees.”
The COVID-19 pandemic has required interventions that reduce in-person labor and close contact between humans. AI-powered automation has become a ‘hero’ that not only helped to prevent the virus from spreading, but also provided some much-needed resilience to many sectors plagued by lockdowns and restrictions, such as the supply chain.
While it is true that the move to AI systems has become permanent for a lot of jobs, many of these systems only handle simple and repetitive tasks that could be easily automated. Although they may be more efficient than humans in some instances, AI technologies cannot substitute a human employee in any area that requires creativity, empathy, ingenuity or critical thinking. Some very human things like face-to-face communication cannot truly be replaced by any machine.
AI just isn’t capable of creating original ideas or independent thinking. Even the most intelligent machines are still just virtual programs and algorithms.
5. “The power needed to perform all future deep-learning operations is unsustainable.”
It is undeniable that AI requires a lot of additional computing power to be trained and to perform its complex deep-learning operations. In a future where most enterprises make use of AI to some extent, this problem could grow to epic proportions, making AI’s use potentially unsustainable.
However, AI is actually providing us with the perfect solutions to help tackle issues like climate change. It can help farmers push yields per hectare, improve energy production by reducing power grid waste and inefficiency, reduce carbon footprints and greenhouse gas (GHG) emissions, bolster strategic planning and decision models on how to tackle climate change and so on.
With additional advancements in computing, like quantum computing, it won’t be long until we have the power and resources to run even more demanding AI systems.
6. “It’s easy for an enterprise to rent the computing power needed to fuel AI operations.”
Perhaps this one would be true if AWS, Google, Microsoft and Alibaba Cloud weren’t currently centralizing the vast majority of the computing power available in the world. So, AI developers currently have just two choices: renting at exceptionally high prices or purchasing their own super-expensive hardware (Also Read: The Four Major Cloud Players: Pros and Cons).
A company called Tatau has developed a blockchain-based supercomputing platform designed to address this issue. Its solution allows the aggregation and reselling of the combined resources of a globally distributed network of GPU-based machines.
Imagine cryptocurrency miners, gamers and other owners of high-performance machines dedicating their computing power to AI development. AI companies could tap into this underexploited source of GPU power to train their machine-learning models at a much lower price. Note that such a platform may also provide an answer to the problem highlighted in myth five, since it promotes efficient use of currently untapped resources.
7. “You need immense amounts of data to train AI.”
This isn’t necessarily the case. Sure, you need a lot of data and computing power to train an AI from scratch, and teaching an AI to perform a complex task, such as driving a car, can require terabytes of data. However, depending on the field of application, pre-trained neural networks are flexible enough to be retrained for specific tasks.
The basic model may come from a larger, more general dataset, with only the last part of the network needing to be replaced and retrained to fill in the blanks. A lot has changed since the early days of AI adoption: newer AI systems can now generate synthetic datasets that can be used to train other AI. MIT researchers have even shown that such datasets can sometimes be more effective than traditional ones, paving the way for a world of new possibilities.
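The retraining idea described above (often called transfer learning) can be sketched in a few lines of plain Python. This is a hypothetical, minimal illustration, not a real deep-learning pipeline: the “pre-trained” part is a frozen feature extractor standing in for the early layers of a large network, and only the final linear layer (the “head”) is trained on a small task-specific dataset:

```python
import math

# Frozen "pre-trained" feature extractor: a stand-in for the early layers
# of a network trained on a large, general dataset. It is never updated.
def features(x):
    return [x, x * x]

# Small task-specific dataset: label 1 when |x| < 1, else 0.
train = [(-2.0, 0), (-1.5, 0), (-0.5, 1), (0.0, 1), (0.5, 1), (1.5, 0), (2.0, 0)]

# Retrain only the final linear layer ("head") on the small dataset,
# using logistic-regression-style gradient descent.
w = [0.0, 0.0]
b = 0.0
for _ in range(5000):
    for x, y in train:
        f = features(x)
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        p = 1 / (1 + math.exp(-z))    # sigmoid output
        g = p - y                      # cross-entropy gradient
        w = [wi - 0.1 * g * fi for wi, fi in zip(w, f)]
        b -= 0.1 * g

predict = lambda x: 1 if sum(wi * fi for wi, fi in zip(w, features(x))) + b > 0 else 0
```

Because the hard representational work is assumed to be done by the frozen extractor, only a handful of examples is needed to fit the new head, which is exactly why pre-trained networks cut the data requirements so dramatically in practice.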
8. “AI will replace existing BI tools, making any previous technology obsolete.”
This myth is a bit of a stretch, to say the least. The majority of modern business intelligence (BI) solutions are highly scalable and often customizable so that any future AI-based model can be easily integrated directly into their platforms.
Companies prefer to implement solutions that carry no risk of workflow disruption, and AI technologies have adapted to this need. Most AI platforms are therefore delivered via the web, so no replacement is necessary; in the worst-case scenario, they can be rolled out in phases to lessen workflow interruptions.
9. “Artificial neural networks are like biological networks, but mechanical.”
No artificial neural network can ever hope to reach a fraction of the complexity of the human brain. It’s like comparing the complexity of a military aircraft to a kite just because they can both fly.
Despite many years of clinical and scientific research, we still fail to understand biological neural networks to their full extent, since neurons fulfil so many different tasks within the human body (think about the difference between a sensory and motor neuron) and even transmit information through many different pathways (using electricity, chemical potential and neurotransmitters).
The majority of AI employed by enterprises is narrow AI, possessing only simple abilities to react to data triggers. Such systems are equipped with little to no memory or data-storage capability and use only historical data to inform decisions.
Strong AI, sometimes called deep AI, that can apply its intelligence and knowledge to solve arbitrary problems is still largely theoretical and has very little current practical application. To put things in perspective, the Fujitsu-built K computer, one of the most powerful supercomputers in the world, needed 40 minutes to simulate the equivalent of just one second of human brain activity!
10. “AI will eventually become intelligent enough to understand that humans are dangerous to it and must be exterminated.”
Well, we can’t actually debunk this myth since it’s not a myth. It’s a reality. Brace yourselves, because resistance is futile!
Jokes aside, AI has nowhere near the intelligence needed to understand the world around itself and make autonomous, rational decisions (Also Read: Why Superintelligent AIs Won’t Destroy Humans Anytime Soon).
Each algorithm is developed to perform one task and is not able to do anything outside of that, let alone reach the ability to think independently. Computers use the ‘brute force’ of their superior computational powers to find a solution to relatively simple issues, but they lack the understanding, perceptive depth and strategic complexity to have a purpose outside the one they’re programmed for.
While it perhaps shouldn’t be written off just yet as a total impossibility, there is no real chance of computers developing sentience, at least not for centuries to come. AI will remain nothing more than another — albeit, more complex — tool for us to use as we please for a long, long time.