8 Major ChatGPT Problems in 2025: Is It Getting Worse?


ChatGPT might be the king of generative AI, but is it getting worse? A quick search across OpenAI’s community forums and Reddit reveals widespread complaints that the chatbot’s performance has declined over time.

As of 2025, ChatGPT has a lot of problems that need to be ironed out, from frequent hallucinations and bias to a lack of common sense reasoning, rampant jailbreaking, and restrictive content moderation.

In this article, we’re going to take an in-depth look at the top ChatGPT problems OpenAI faces in 2025. This includes a breakdown of what ChatGPT is bad at from a user perspective.

Key Takeaways

  • ChatGPT might be the most popular chatbot in the world, but it’s not without some serious problems.
  • Hallucinations often cause ChatGPT to share verifiably false information.
  • Many researchers have found ChatGPT has a liberal bias.
  • Jailbreaks can still be used to flout ChatGPT’s content moderation restrictions.
  • Content moderation guidelines can also be quite restrictive.

Top 8 ChatGPT Problems in 2025


1. Hallucinations

One of the biggest problems with ChatGPT is the chatbot’s tendency to hallucinate and generate inaccurate responses to questions. For example, if you ask ChatGPT a historical question, there’s a chance it could provide you with a response that is verifiably incorrect.

According to Brenda Christensen, chief executive officer at Stellar Public Relations Inc., ChatGPT “frequently makes simple errors.” She said:

“Case in point, I requested that it compare a New Year’s Day social post, and it inaccurately stated the incorrect year 2024.”


So, how often is ChatGPT wrong?

One study conducted by Purdue University found that 52% of programming answers generated by ChatGPT were incorrect, suggesting the need for consistent fact-checking when using the tool.

While certain techniques like reinforcement learning from human feedback (RLHF) can help lower the frequency of hallucinations, Yann LeCun, chief AI scientist at Meta, argues that hallucinations are an “inevitable” part of auto-regressive LLMs.

But why does ChatGPT give wrong answers?

It appears that part of the reason for this is that LLMs learn patterns in training data and then use these patterns to predict text that will answer the question in a user’s prompt.

Some of these predictions are thus bound to be wrong.
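The mechanism above can be illustrated with a minimal sketch: a toy bigram "model" that, like a real LLM (at vastly greater scale), only picks continuations that its learned distribution makes likely. The vocabulary and probabilities here are hypothetical, invented purely for illustration, not real model weights.

```python
import random

# Toy "language model": for each context word, a learned distribution over
# next words. Real LLMs do this over tokens with billions of parameters,
# but the principle is the same: predict what's likely, not what's true.
bigram_probs = {
    "capital": {"of": 0.9, "city": 0.1},
    "of": {"france": 0.5, "spain": 0.3, "atlantis": 0.2},  # a wrong option with nonzero probability
}

def predict_next(word: str, greedy: bool = True) -> str:
    """Pick the next word: greedily (most likely) or by sampling the distribution."""
    dist = bigram_probs[word]
    if greedy:
        return max(dist, key=dist.get)
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Greedy decoding picks the most probable continuation...
print(predict_next("capital"))  # -> "of"
print(predict_next("of"))       # -> "france"
# ...but sampling can surface "atlantis": plausible-looking, verifiably wrong.
```

Because a wrong continuation like "atlantis" carries nonzero probability, sampling will occasionally emit it, which is, in miniature, what a hallucination is.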

2. Lack of Common Sense

Another one of the main issues with ChatGPT is its lack of common sense. Unlike a person, a chatbot doesn’t think and thus has no real understanding of what it has said or whether it’s logically correct.

As Liran Hason, VP of AI at Coralogix, told Techopedia:

“GPT relies on patterns of training data that it learned how to answer questions from. It doesn’t actually understand the world and has a date cutoff point before which no information is even included in the data.”

These limitations mean that ChatGPT is only capable of basic reasoning. For instance, it can answer basic mathematics questions like “What is 30 + 30?” but may struggle with more complex concepts and equations.

That being said, OpenAI is looking to address this limitation with more powerful models like o1, which it claims are capable of advanced reasoning.

3. Lack of Creativity

When it comes to creative tasks, ChatGPT can be useful but often generates extremely boring content. All too often, sentences created with ChatGPT are formulaic, whereas a human writer would have a more natural ebb and flow.

Dmytro Tymoshenko, CEO of Noiz, told Techopedia:

“The outputs it generates are usually bland and generic and lack original thoughts. They’re structured and coherent, sure, but most of the time, they carry little to no information value.

“The more you feed ChatGPT with similar prompts, the more it learns the ‘template’ of the standard answer, which results in you receiving outputs that are almost exactly the same.”

To make matters worse, this lack of common sense and genuine understanding means that ChatGPT has no real insights to offer into the world around us.

It’s unlikely that ChatGPT could write an article, screenplay, or book that could captivate an audience like a human creative.

4. Bias

Another problem facing ChatGPT is that of bias. On a number of occasions, OpenAI’s chatbot has displayed significant bias.

Most notably, a study released by the University of East Anglia in 2023 asked ChatGPT questions about its political beliefs and found that the results displayed a “Significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil and the Labour Party in the UK.”

A number of other studies have also indicated a liberal bias.

These biased outputs have led to a significant backlash against the chatbot, with Elon Musk going so far as to tweet that “ChatGPT has woke programmed into its bones.”

Given that models like GPT-4o and GPT-4 are developed with a black-box approach, it’s on users to check outputs against third-party sources to make sure they’re not being misled.

5. Jailbreaking

Content moderation restrictions are one of the main controls that help prevent ChatGPT from producing hateful and malicious content. Unfortunately for OpenAI, these content moderation guidelines can be sidestepped through the use of jailbreaking.

One of the worst ChatGPT fails came when Alex Albert, now head of Claude relations at Anthropic, jailbroke GPT-4 just days after its release.

We also saw a spate of users entering prompts such as “Do Anything Now” (DAN) to override the model’s content moderation guidelines and generate outputs that would otherwise be blocked.

While some users use jailbreaks to avoid overzealous content moderation, others use them to create hateful and malicious content.

For example, threat actors can use jailbreaks to create phishing emails or even malicious code to steal users’ personal data.

6. Declining Performance in Long Conversations

Many users have complained that the longer a conversation with ChatGPT goes on, the more its performance declines. Common complaints are that during long conversations, ChatGPT stops following instructions or forgets details.

This is a big limitation because it makes it difficult to engage with the chatbot for any length of time.

After all, having to start new chats periodically doesn’t make for a good user experience.

7. Too Much Content Moderation

Another one of ChatGPT’s errors is that it has overly restrictive content moderation. While too little content moderation creates risks of misuse, ChatGPT’s content filters are often overzealous. It’s not uncommon to ask a question about an inoffensive topic and have the assistant refuse to answer.

At the same time, the user doesn’t have any transparency into what content moderation policies are guiding content creation behind the scenes. This offers little insight into how to avoid having queries moderated and whether or not the guidelines themselves are ideologically biased.

In any case, as a private company, OpenAI has the right to implement whatever level of content controls it believes keeps its users and reputation safe. But that has to be balanced against offering a consistent user experience, and right now it isn’t.

8. Voice Recognition Inconsistencies

Although GPT-4o and advanced voice mode launched to critical acclaim, Techopedia’s experimentation with the feature has been a mixed bag.

All too often, ChatGPT with GPT-4o makes mistakes, misunderstanding voice inputs and issuing irrelevant responses.

During our testing, verbal prompts often had to be input multiple times before the model understood what was said. Such inconsistencies make voice conversations much less convenient than entering text.

The Bottom Line

ChatGPT has some serious flaws that users need to watch out for. If you search online, you’ll find plenty of examples of ChatGPT being wrong and spreading misinformation.

The reality is that if you’re using ChatGPT, you need to be extremely proactive about fact-checking and cross-referencing outputs to make sure you’re not influenced by misinformation or bias.



Tim Keary
Technology Writer

Tim Keary is a technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology. He holds a Master’s degree in History from the University of Kent, where he learned of the value of breaking complex topics down into simple concepts. Outside of writing and conducting interviews, Tim produces music and trains in Mixed Martial Arts (MMA).