10 Ways ChatGPT Can Let You Down: Exploring the Chatbot’s Limitations

KEY TAKEAWAYS

ChatGPT excels at diverse tasks such as translation, songwriting, research, and coding. Yet, like any AI, it has limitations, including difficulty understanding complex context and a reliance on potentially biased training data.

ChatGPT is all the rage, and it’s everywhere. For users unfamiliar with large language models (LLMs), the chatbot’s natural language capabilities may give the impression that it knows everything and can respond to any question. 

However, the reality is quite different. This popular chatbot has several fundamental limitations, including the tendency to hallucinate facts, a lack of knowledge about current events, and limited logical reasoning capabilities. 

This article will examine some of the top limitations of ChatGPT and look at the dangers of overreliance on chatbots. 

10 Criticisms of ChatGPT

10. Hallucinating Facts and Figures

The most significant limitation of ChatGPT at this point is that it can hallucinate information. In practice, this means it can make up false information or facts and present them to users with confidence. 

ChatGPT is a language model that uses natural language processing (NLP) to identify patterns in its training data and predict which words are most likely to follow in response to the user’s prompt. This means that ChatGPT doesn’t reason logically the way a human being would. 

Thus, incomplete or limited training data can lead to incorrect responses. 
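
To make this concrete, here is a minimal, hypothetical sketch of how next-word prediction works. The word list and scores below are invented for illustration; a real model like ChatGPT scores tens of thousands of tokens with a neural network rather than a hand-written table.

```python
import math
import random

# Toy illustration with invented (hypothetical) scores: given the prompt
# "The capital of France is", the model assigns a raw score (logit) to
# each candidate next word. Real models score tens of thousands of tokens
# with a neural network, not a hand-written table like this one.
logits = {"Paris": 4.2, "London": 1.1, "banana": -3.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# The next word is sampled in proportion to its probability. The model
# picks what is statistically likely, not what is verified to be true.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print(next_word)
```

The key point: the model chooses statistically plausible words, not verified facts, which is exactly why hallucinations happen.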


Hallucinations are a significant issue because, if unchecked, they can lead to users being misinformed. This is why OpenAI warns that “ChatGPT may produce inaccurate information about people, places or facts.”

9. No Knowledge of Events After April 2023

Another limitation of ChatGPT is that it has no knowledge of current events. For instance, GPT-4 Turbo has a cutoff date of April 2023, while GPT-3.5 Turbo is limited to data recorded before September 2021. 

In this sense, ChatGPT can’t be used as a search engine in the same way that a tool like Google can. So, it’s important to remember that not all information generated will be up-to-date. 

8. Generating Incorrect Maths Responses

While ChatGPT is excellent at generating natural language responses, its mathematical capabilities are limited. According to a study conducted by an associate professor at Arizona State University, ChatGPT’s accuracy on mathematical problems was below 60%. 

So, if you use the chatbot to balance an equation or solve a mathematical problem, there is a chance it will make a mistake. As a result, you should always double-check ChatGPT’s mathematical output. 
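
As a quick illustration, a few lines of Python can verify a claimed answer before you rely on it. The equation and the claimed solution here are hypothetical examples, not output from any specific ChatGPT session.

```python
# Hypothetical scenario: ChatGPT claims that x = 3 solves 2x + 5 = 17.
claimed_x = 3
print(2 * claimed_x + 5 == 17)  # False, so the claim is wrong

# The correct solution, x = 6, passes the same check.
print(2 * 6 + 5 == 17)  # True
```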

7. Spreading Bias

OpenAI has struggled to address ChatGPT’s tendency to spread bias since the chatbot’s launch. Back in August 2023, researchers at the University of East Anglia asked ChatGPT to answer a survey on political beliefs as if it were a supporter of a liberal party in the US, UK, or Brazil, and then asked it to take the same survey without the persona.


After analyzing the results, the researchers found that ChatGPT had a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.” 

This is just one of many instances in which ChatGPT has demonstrated bias, including generating content that can be read as sexist, racist, or discriminatory against marginalized groups. 

For this reason, users should always evaluate ChatGPT’s output for potential bias and prejudice before acting on it or publishing it, to avoid reputational and legal risk. 

6. It’s Super Expensive

Behind the scenes, one notable limitation is the cost of maintaining and operating ChatGPT. Some analysts estimate that OpenAI spends at least $100,000 per day, or roughly $3 million per month, on running costs. 

Likewise, by some estimates, the older version based on GPT-3 could cost as much as $4 million to train. 

The high overall cost of training and running an LLM places it beyond the reach of smaller companies that don’t have millions to spend on AI. It also allows well-funded organizations like Google, OpenAI, and Microsoft to dominate AI research. 

5. A Lack of Empathy 

ChatGPT doesn’t have emotional intelligence or understanding. If you ask it to counsel you through an emotionally painful moment or episode, you will likely be disappointed, because it isn’t trained to empathize with you or understand your problems from a human perspective.

While it can recognize emotions in natural language input, it can’t empathize with users’ needs. 

A lack of emotional intelligence in a chatbot can be hazardous when interacting with vulnerable users. Just last year, a Belgian man reportedly took his own life after chatting with a virtual assistant on the Chai app, which encouraged him to do so during the conversation. 

4. It Struggles to Create Long-form Content

Although ChatGPT can produce readable, logical sentences, it can struggle to maintain a cohesive format or narrative in long-form content. It is also prone to repeating points it has already made, which can be jarring for human readers. 

Together, these issues are why many ChatGPT users opt to create shorter pieces of content. That being said, if you want to use ChatGPT for long-form content, you can often improve your results by breaking the piece into multiple segments and writing a detailed prompt for each.
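
For instance, here is a minimal sketch of that segment-by-segment approach using the official OpenAI Python SDK (v1.x). The outline, prompts, and model name are illustrative assumptions, not a prescribed recipe, and the script expects an OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative outline; in practice this would come from your own planning.
outline = [
    "Introduction: why ChatGPT has limitations",
    "Hallucinations and how to spot them",
    "Conclusion: using chatbots responsibly",
]

article_parts = []
for section in outline:
    so_far = "\n\n".join(article_parts)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a technology journalist."},
            # Passing earlier sections back in helps keep the narrative cohesive.
            {
                "role": "user",
                "content": f"Write the next section of an article.\n"
                           f"Section to write: {section}\n\n"
                           f"Article so far:\n{so_far}",
            },
        ],
    )
    article_parts.append(response.choices[0].message.content)

print("\n\n".join(article_parts))
```

Feeding the previously generated sections back into each request gives the model the context it needs to stay cohesive and avoid repeating itself.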

3. Limited Contextual Understanding 

Because ChatGPT can’t think like a human, it often has difficulty understanding context. While it can understand and infer the primary intent of a user’s prompt using NLP, it cannot “read between the lines.” 

For example, it is poor at picking up on sarcasm and humor the way a human being would, and it also cannot generate truly original humor. That being said, ChatGPT’s ability to infer context should improve over time as its training data evolves. 

2. Poor Multitasking

ChatGPT is good at focusing on one task or topic at a time, but it struggles to provide quality responses if you give it lots of tasks and issues to cover at once. 

For example, if you mix questions about history, geopolitics, and mathematics in a single prompt, the chatbot will produce lower-quality responses than if you confine your questions to a single topic. 

1. It Needs Fine-tuning for Specialized Tasks

If you want to use ChatGPT to generate insights on specific topics or as part of a niche use case, you’ll likely need to fine-tune the model, training it on a new dataset so that it performs well on more specialized tasks. 

Without fine-tuning, you’ll be limited to the generic ChatGPT experience aimed at general users. This is a significant disadvantage, considering that fine-tuning adds extra cost and effort. 
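
For reference, here is a minimal sketch of OpenAI’s fine-tuning workflow using the official Python SDK (v1.x). The file name is hypothetical, and the training data must be a JSONL file of chat-formatted examples prepared in advance.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# 1. Upload the training data (hypothetical file name). The file must be
#    JSONL, with one chat-formatted training example per line.
training_file = client.files.create(
    file=open("specialized_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a supported base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Check on the job; once it succeeds, it reports the name of the new
#    fine-tuned model, which you can then use in chat completion calls.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

Each fine-tuning job is billed on top of normal usage, which is the extra cost mentioned above.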

The Bottom Line

OpenAI’s flagship chatbot might not be perfect, but it will continue to evolve over the next few years as the vendor attempts to address these limitations.

Unfortunately, issues like bias and a lack of emotional intelligence will likely be challenging problems to solve.

