
What is ChatGPT?  

ChatGPT (Chat Generative Pre-trained Transformer) is a series of popular generative AI chatbots developed and maintained by OpenAI. The large language models (LLMs) that support earlier chatbot models were unimodal and could only process and generate text. The latest versions of the chatbot are multimodal and can recognize images, generate images, engage in voice conversations, and search the Internet in real-time through the same conversational user interface (CUI).


OpenAI monetizes ChatGPT by charging developers for access to the chatbot’s application programming interfaces (APIs) and by offering two types of paid subscriptions. To empower developers with varying levels of coding expertise, OpenAI has released a drag-and-drop tool called the Assistants API. This low-code/no-code (LCNC) developer tool gives end users with minimal coding experience the ability to create custom chatbots that can be shared or sold through OpenAI’s GPT store.

ChatGPT currently has over two million developers and over 100 million weekly active users and is being used by at least 92% of Fortune 500 companies. To help ensure that artificial intelligence (AI) is being used responsibly, AI engineers and automated supervision systems monitor user prompts and model outputs continuously. To protect user data privacy, OpenAI does not use ChatGPT conversations for model training without consent.

How Does ChatGPT Work?

ChatGPT processes and generates content in small chunks of text called tokens. Tokens provide a consistent way for ChatGPT’s neural network architecture to transform variable-length text strings into manageable, fixed-size input vectors.

When ChatGPT receives a new prompt, the first thing it does is break the prompt down into a series of tokens. It then analyzes the token series to identify patterns and relationships and compares them with patterns and relationships the model observed in its training data.
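The idea of converting text into token IDs can be sketched in a few lines. This is a deliberately simplified illustration: ChatGPT actually uses byte-pair encoding with a vocabulary of roughly 100,000 subword tokens, not the toy word-level vocabulary assumed here.

```python
# Simplified sketch of tokenization (illustration only -- real GPT tokenizers
# use byte-pair encoding over subwords, not whole-word lookup).
def toy_tokenize(text, vocab):
    """Map a variable-length string to a list of integer token IDs."""
    ids = []
    for word in text.lower().split():
        # Unknown words fall back to a reserved <unk> ID, mimicking how
        # real tokenizers guarantee every input can be encoded.
        ids.append(vocab.get(word, vocab["<unk>"]))
    return ids

vocab = {"<unk>": 0, "what": 1, "is": 2, "chatgpt": 3, "a": 4, "chatbot": 5}
print(toy_tokenize("What is ChatGPT", vocab))  # [1, 2, 3]
```

The fixed-size integer IDs are what the neural network actually consumes; the model never sees raw characters.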

GPT is based on a transformer architecture that was first introduced in a research paper entitled “Attention Is All You Need.” The architecture has since become the foundation for many state-of-the-art natural language processing (NLP) models. 

This architecture uses a technique called self-attention to identify long-range dependencies in content and weigh the importance of different tokens in a sequence. The process is carried out in parallel to produce different weighted representations. 

The results are then concatenated and linearly transformed to produce a token output. The AI model aims to produce a series of statistically similar tokens – but not the same – as the data used in training.
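The core of the self-attention step described above can be sketched in plain Python. This is a minimal single-head example over three tokens with two-dimensional embeddings, far smaller than GPT’s actual dimensions, and it omits the learned query/key/value projection matrices for brevity.

```python
import math

def softmax(xs):
    """Convert raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Scaled dot-product attention: each token's output is a weighted
    sum of all value vectors, weighted by query-key similarity."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # how much this token attends to each token
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token embeddings
result = self_attention(x, x, x)
```

In a real transformer, this computation runs in parallel across many attention heads, and the heads’ outputs are concatenated and linearly transformed, as described above.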

Processing for this type of deep learning (DL) happens very quickly; short responses that statistically resemble the semantics (meaning) and syntax (structure) of the training data can be completed in milliseconds.

How ChatGPT Works

How Was ChatGPT Trained?

OpenAI used a website crawling tool, GPTBot, to collect the vast amount of data required to train the ChatGPT foundation model, which was then trained with “reinforcement learning from human feedback” (RLHF). This is a relatively new approach to training large language models that combines supervised learning and reinforcement learning strategies. 

In this method, human trainers interacted with a base version of the GPT model to generate conversations in which they played both the user and an AI assistant. (The human trainers were also encouraged to develop their own interactions.) The conversations were then mixed in with the original training data, and the model used the new dataset to fine-tune responses.  

During this part of the training process, the model was tasked with generating several potential responses to a given prompt, and human trainers ranked the responses based on their quality and relevance. 

The model uses the ranked comparison data and a Proximal Policy Optimization (PPO) reinforcement algorithm to learn what made some responses rank higher than others – and why higher-ranked responses should be used in future interactions. 

PPO works by comparing two policies: the current policy, which generates responses, and an earlier snapshot of that policy used as a reference. The algorithm updates the current policy in small, clipped steps so that no single update moves it too far from the reference. 

PPO is a relatively simple algorithm to implement, and it is a very effective tool for incorporating user feedback for continuous training and helping decision-making entities optimize outputs in complex AI environments. 
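The heart of PPO, the clipped surrogate objective, can be sketched in a few lines. This shows only the core update rule, not a full RLHF training loop; the probabilities and advantage value below are illustrative inputs.

```python
def ppo_clip_objective(prob_new, prob_old, advantage, epsilon=0.2):
    """PPO's clipped surrogate objective for a single action.

    ratio > 1 means the updated policy favors the action more than the
    reference policy did. Clipping the ratio to [1-eps, 1+eps] and taking
    the minimum penalizes updates that move the policy too far at once.
    """
    ratio = prob_new / prob_old
    clipped = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    return min(ratio * advantage, clipped * advantage)

# A large probability jump (0.3 -> 0.9) gets clipped: the objective caps
# out at 1.2 * advantage rather than rewarding the full 3x ratio.
print(ppo_clip_objective(0.9, 0.3, 1.0))  # 1.2
```

In RLHF, the advantage comes from a reward model fitted to the human trainers’ rankings, so higher-ranked response styles receive positive advantages and are reinforced.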

ChatGPT Explained
Source: OpenAI

Who Created ChatGPT?

ChatGPT was created by OpenAI, an artificial intelligence research laboratory consisting of the for-profit OpenAI LP and its non-profit parent company, OpenAI Inc.

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. The team at OpenAI includes a diverse group of AI researchers, engineers, and other professionals who work collaboratively on developing large language models for generative AI.

Greg Brockman is the Chairman, and Ilya Sutskever is the Chief Scientist of OpenAI. Sam Altman currently serves as the CEO.

Elon Musk was one of OpenAI’s co-founders, but he resigned from the board in 2018 and is now developing an alternative chatbot called Grok at his latest startup, xAI.  

Is ChatGPT Free?

The browser version of GPT-3.5 is currently free to use, as are OpenAI’s mobile apps for Android and iOS devices. 

Not all ChatGPT offerings are free, however. 

  • Frequent users who want faster response times and reliable access during peak hours can subscribe to ChatGPT Plus. ChatGPT Plus is a $20/month subscription model that offers guaranteed access to ChatGPT during peak usage times, faster response times, and priority access to new features and improvements. 
  • ChatGPT Enterprise is a paid subscription model for businesses. This version can access the internet and is often used to improve customer engagement and service delivery. ChatGPT Enterprise is a $300/month subscription model that offers all of the benefits of ChatGPT Plus, as well as dedicated technical support and access to prompt engineering templates specifically designed for business users.
  • OpenAI also offers a pay-per-use API for ChatGPT. The API is an interface that allows developers to send prompts and receive responses from the ChatGPT model programmatically. The API enables developers to integrate ChatGPT’s conversational capabilities into their own applications or services. Usage costs are determined by the number of API calls. The API for GPT-4 Turbo is currently the least expensive model for developers to run.

How to Access ChatGPT

ChatGPT can be accessed for free through web browsers, browser plug-ins, mobile apps, and APIs in some third-party applications. 

The browser version of ChatGPT provides users access to their chat history and account settings. The mobile versions offer voice chat, real-time Internet search, and image recognition capabilities.

Web browser: Go to chat.openai.com and sign up for an account. Once logged in, type a prompt and click the gray “send message” arrow in the prompt box. ChatGPT’s response will be accompanied by an optional “regenerate” button and thumbs up/thumbs down icons. User feedback helps improve future responses.

Mobile Apps: Visit the Google Play Store or Apple App Store to download and install the official OpenAI mobile apps for ChatGPT. The official mobile apps offer the same conversational user interface (CUI) and features as the latest web browser version.

Plug-ins: To install a ChatGPT browser plugin, visit the Chrome Web Store or Firefox Add-ons store and search for “ChatGPT.” Once the browser plugin has been installed, users can click on the ChatGPT icon in their browser toolbar and type a prompt from any webpage. 

GPT Plus: ChatGPT Plus is a paid subscription plan for individuals who want guaranteed access to ChatGPT-3.5 and ChatGPT-4. This option costs $20 per month and is a good choice for users who need reliable access to ChatGPT during peak times of the day. It is also a good option for users who want to be among the first to try new features and improvements, because OpenAI uses this version of the chatbot for canary releases.

ChatGPT Enterprise: This paid subscription model is designed to provide a variety of advanced functionalities for businesses, including more robust security and risk management capabilities. 

GPT API: OpenAI provides detailed documentation for the developer community that outlines how developers can use ChatGPT’s API to make requests, handle responses, manage parameters, and adjust temperature (a hyperparameter that controls the randomness of the model’s output). 
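A request to the chat API is essentially a JSON body containing a model name, a list of role-tagged messages, and optional parameters such as temperature. The sketch below builds that body; the model name and temperature shown are illustrative, and an actual request would be POSTed to OpenAI’s documented chat completions endpoint with an API key in the Authorization header.

```python
import json

def build_chat_request(prompt, model="gpt-4", temperature=0.7):
    """Assemble a request body for a chat completions call.

    The "system" message sets the assistant's behavior; the "user"
    message carries the prompt. Lower temperature -> more deterministic
    output; higher temperature -> more random output.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

body = json.dumps(build_chat_request("Explain tokens in one sentence."))
# An actual call would POST `body` with an "Authorization: Bearer <key>" header.
```

Usage costs are metered per token processed, which is why developers often trim prompts and cap response lengths.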

Offline Access: A number of publicly available scripts claim to let users run a chatbot offline. Because OpenAI has not released ChatGPT’s model weights, these scripts typically rely on downloading an open-source large language model and running it locally on the user’s computing device instead.  

What Can ChatGPT Do?

ChatGPT models are still developing, but they have already demonstrated their potential to be change agents for people using the Internet, doing their jobs, and completing tasks requiring creativity.  

Currently, ChatGPT can be used to:

  • Answer questions
  • Complete a given text or a phrase
  • Create outlines for fiction and non-fiction content
  • Generate short-form fiction and non-fiction content from prompts
  • Respond to user prompts in a variety of conversational tones
  • Analyze sentiment
  • Translate text samples from one language to another
  • Perform calculations
  • Generate programming code and fix bugs in existing code
  • Summarize a given text
  • Fact-check a given text sample
  • Offer suggestions for how to make a given text sample more accurate, clear, and concise
  • Classify the information in a given text into different categories
  • Organize text in a table
  • Provide suggestions for how to use, analyze, or summarize the data in a spreadsheet
  • Provide the hypertext markup language (HTML) for a table or comparison chart
  • Generate HTML for web page buttons, checkboxes, and input forms
  • Provide subject-specific keywords and long-tail keywords for search engine optimization (SEO)
  • Use voice to carry on a conversation
  • Search the Internet
  • Discuss an image

How Are People Using ChatGPT?

ChatGPT and generative models like it are increasingly being used in new fields and business contexts. 

Below are some of the ways individuals and businesses are currently using ChatGPT.

Content Creation

Writers are using ChatGPT to generate text content, brainstorm ideas for new content, and improve written content by asking the chatbot for suggestions and corrections. Although its use is controversial, the benefits are expected to outweigh the concerns.

Education and Learning

Educators and students are using ChatGPT as a research tool and writing assistant

Programming Assistance

Developers are using ChatGPT as a collaborative programming assistant to generate code snippets and improve agile programming time-to-market.

Customer Support

Call center managers are deploying ChatGPT to assist with customer inquiries and provide customer support 24/7. 


Game Development

Software developers are using ChatGPT to create new video game scenarios, characters, and narratives.

Language Translation

ChatGPT can help people translate text from one language to another. Unfortunately, the depth and accuracy of its responses are likely to vary by language.

Email Communication

Microsoft is integrating ChatGPT into its Outlook email platform to help people draft and edit their email communications.


Email Marketing

Marketers are using ChatGPT to personalize email messages and analyze data to gain insight into customer behavior.


Manufacturing

Manufacturers are using ChatGPT to update product manuals and create employee training materials. Now that ChatGPT has a conversational user interface (CUI), the technology is likely to be integrated into many different types of products, including vehicle infotainment and control systems.

Accessibility Support

ChatGPT can act as an assistive technology that provides support to individuals with different types of visual, auditory, movement, and cognitive disabilities.

Blockchain Support

ChatGPT can be used to generate smart contracts in various programming languages. The technology makes it easier to build automated trading systems, analyze blockchain data, generate code for specific trading strategies, and provide customer support for blockchain applications.

Online Security

Cybersecurity professionals are using ChatGPT to recognize patterns in phishing emails and provide natural language summaries for log data.

Healthcare Support

Healthcare professionals are using ChatGPT as an assistant for administrative tasks. While not a substitute for professional healthcare, ChatGPT is increasingly used as a search engine to answer people’s questions about health conditions, treatments, and medications.

When Was ChatGPT Released?

The original model in the GPT series was introduced in June 2018. Each new version has improved on its predecessor in capabilities, performance, security, user privacy, and scalability. 

GPT-2: GPT-2 was announced in February 2019 by OpenAI. Initially, OpenAI did not release the full model due to concerns about its potential misuse. After implementing safety measures, OpenAI released the fully trained GPT-2 model in November 2019.

GPT-3: GPT-3 was introduced in June 2020. OpenAI introduced a new approach to accessing GPT-3. Instead of releasing the trained model publicly, OpenAI provided access to GPT-3 through an application programming interface (API). This allowed OpenAI to maintain control over the technology’s use.

GPT-3.5: GPT-3.5 was released on 30 November 2022 and was a significant improvement over previous GPT models. It offered several new features and capabilities, including the ability to generate more realistic and engaging conversations, understand and respond to more complex prompts, and replicate different types of writing styles and formats. 

GPT-4: GPT-4 was released on 14 March 2023. It performs faster and generates more accurate, comprehensive responses to complex prompts. This version of the chatbot offers several new capabilities, including the ability to search the Internet in real-time, accept image prompts, and use voice to converse.

GPT-4 Turbo: GPT-4 Turbo is only available to developers. This version of the chatbot can process the equivalent of over 300 pages of text in a single prompt and has an extended knowledge base that includes information up to April 2023. 

Differences Between ChatGPT-3 and ChatGPT-4

ChatGPT-4 is the successor to ChatGPT-3.5. Here are some of the key differences between the two versions of the chatbot:

Size: ChatGPT-4’s foundation model is much larger than the models used to train ChatGPT-3 and ChatGPT-3.5. This allows ChatGPT-4 to process more types of information and generate longer, more complex responses.

Multimodal capabilities: ChatGPT-3 and ChatGPT-3.5 can only process and generate text. In contrast, ChatGPT-4 can understand, process, and generate text, images, and voice. (Editor’s Note: The chatbot’s newest capabilities are currently being deployed through canary tests, so they may not be available consistently to all users on all devices.) 

Accuracy and fluency: ChatGPT-4 is arguably less likely to generate factually incorrect responses because it was trained on a much larger, more diverse dataset than previous versions of ChatGPT.

GPT3 vs. GPT4

How to Write ChatGPT Prompts

ChatGPT user inputs are called prompts. They are the starting point of each conversation or query, and they provide the large language model that supports GPT with instructions and context for returning responses. 

To write effective ChatGPT prompts, it is essential to be clear, concise, and specific. 

Each prompt should provide enough context and information for the AI language model to generate a comprehensive and informative response. (Editor’s Note: Depending on the implementation, ChatGPT may remember previous interactions within the same session to maintain conversational coherence.) 

Here are some tips for writing effective ChatGPT prompts:

  • Use complete sentences and proper grammar. This will help ChatGPT understand prompts better and return more accurate responses.
  • Break long prompts down into a series of shorter, simpler prompts.
  • Arrange prompts in a logical sequence to maintain coherence.
  • Provide the chatbot with the desired format for each response. 
  • When a prompt response isn’t acceptable, try regenerating the response. If that doesn’t work, change the prompt iteratively to gradually home in on the desired outcome.


How to Use ChatGPT Responses

Users need to recognize that ChatGPT responses are not based on actual knowledge. Responses are purely based on patterns and relationships the AI model learned during training. 

Here are some tips for using ChatGPT responses ethically and responsibly:

  • Refrain from using responses verbatim. While ChatGPT can produce coherent and contextually relevant outputs, they may not always be unbiased or factually accurate.
  • Look for ways to combine responses from a series of prompts logically and coherently.
  • Edit the combined responses.
  • Double-check to make sure the combined responses make sense and are factually correct.
  • Follow relevant guidelines for citing ChatGPT as a reference source if applicable.

How to Get Rid of the Gray Background in ChatGPT Responses

When users copy ChatGPT responses with keyboard shortcuts, the copy keeps the response’s format. Unfortunately, that includes the interface’s gray background. 

To remove the gray background, users must use the copy icon at the top of each response or paste the content as plain text. Here are some general instructions for how to paste responses as plain text: 


Windows:

  • Copy the text as usual using Ctrl + C;
  • When pasting, use Ctrl + Shift + V; 
  • Alternatively, right-click, choose “Paste Options,” and then click “Keep text only” to remove formatting. 


macOS:

  • Copy the text as usual using Command + C;
  • When pasting, use Shift + Option + Command + V;
  • Alternatively, look for “Paste and Match Style” or a similar option in the Edit menu.

Web Browsers and Word Processors:

  • Copy as usual.
  • Right-click to bring up the context menu and look for the “Paste as plain text,” “Paste Special,” “Keep Text Only,” or “Paste without formatting” option.

If none of these options are available, paste the copied text into a simple text editor that doesn’t support formatting, such as Notepad or TextEdit. Copying the pasted version from the text editor should remove all formatting – including ChatGPT’s gray background.

Is Using ChatGPT Ethical?

Using ChatGPT is generally considered ethical, but developers, individuals, and businesses using OpenAI’s models should be mindful of several considerations that have raised questions about OpenAI’s ethics. 

  • According to OpenAI, data scientists gathered the huge amount of data required to train the LLM by scraping the Internet. They supplemented this data with text sources either in the public domain or publicly available. While this approach to obtaining the vast amount of data required to train ChatGPT was legal, many content creators feel that OpenAI did not obtain the training data ethically. A growing number of web publishers want to be compensated for their data’s use.
  • A related concern is that while OpenAI did not reveal how they were able to label vast amounts of training data for ChatGPT’s foundation model, they did acknowledge outsourcing much of the labeling, likely through crowdsourcing platforms like Amazon’s Mechanical Turk. The practice is controversial because MTurk workers are short-term contractors: they don’t receive health benefits, are often paid pennies for their time, and have no recourse if they are not paid for their work.
  • Perhaps the biggest concern is the chatbot’s ability to generate harmful or misleading content that violates OpenAI’s ethical safeguards. Addressing these concerns requires the creation of thoughtful government and corporate policies, new guidelines for Responsible AI, and best practices for mitigating the adverse impacts of AI jailbreaking.

Is ChatGPT Safe to Use?

Concerns about corporate data leakage and privacy have prompted OpenAI to put policies in place to protect the data shared in user prompts. Still, it’s important to remember that safety ultimately depends on the context in which ChatGPT is being used. 

Using the model for educational purposes, content creation, or information gathering while adhering to guidelines is generally considered safe. In contrast, using the model to make critical decisions without verification can be extremely unsafe. 

For example, it’s essential to consult a qualified professional or a credible source when using the AI assistant to obtain health, finance, or legal information. ChatGPT’s responses may not be accurate, up-to-date, or based in reality. Users should be aware of the technology’s limitations and put their critical thinking caps on before using responses verbatim. 

Chat GPT and Plagiarism

When using content generated by Chat GPT verbatim, it’s important to cite the chatbot and provide an appropriate attribution to avoid the risk of plagiarism.

That’s because when ChatGPT generates responses, it can create outputs so similar to the content the model was trained on that the content may be flagged by Turnitin or similar anti-plagiarism tools. 

To mitigate this concern, users can run the generated content through AI copywriting tools, including plagiarism detection tools, to ensure the content’s originality and help the user make necessary adjustments.

How to Cite ChatGPT

Because ChatGPT is not a traditional publication or a human author, it’s becoming increasingly acceptable to treat ChatGPT like a search engine and repurpose responses that have been edited without attributing OpenAI. 

When citing a source like ChatGPT-4 in a research article or scholarly context, however, it’s crucial to provide acknowledgment that refers the reader back to the source of the information. The chatbot should be cited as a reference tool if the original source cannot be identified. 

Here is a suggestion for how to cite ChatGPT as a reference tool:

ChatGPT [version], OpenAI, accessed [date], [URL].

Is ChatGPT Replacing Jobs?

The development and adoption of ChatGPT and similar AI models are expected to impact the job market much like the Internet, and other technology advancements have in the past. As with all technological advances, there will likely be a shift in the types of skills that are in demand.

Widespread adoption and ChatGPT use will likely result in job displacement, job augmentation, and the creation of new job roles. However, the broader impact on certain demographics in the job market depends on several economic, societal, and ethical choices. 

Policymakers, business leaders, and educators are expected to play an important role in determining how AI is integrated into the workforce and how potential job losses are addressed. It’s already clear that preparing for the future of work involves continuous learning (upskilling and reskilling) and adaptability.

Limitations of ChatGPT

While ChatGPT has diverse applications, it is crucial to consider the technology’s limitations. The bottom line is that ChatGPT should be treated as an AI assistant and not a single source of truth (SSoT). Even though the chatbot appears able to pass versions of the Turing Test, it does not have consciousness or emotions. 

Users should verify information provided by ChatGPT with credible sources, especially in critical domains like health and finance. 

ChatGPT and AI Bias

Generative models like ChatGPT learn from the data they are trained on. If the training data contains biases, the models will learn and potentially reinforce existing stereotypes and inequalities. 

Bias in AI models can lead to discriminatory practices and potentially result in legal repercussions. Organizations may face lawsuits and regulatory actions if their ChatGPT use results in unfair treatment or discrimination.

OpenAI recognizes the challenges of inherent bias in training data and is committed to best practices for mitigating the unintentional impact of ChatGPT models. This includes:  

  • Using large, diverse datasets for training to capture a wide range of human knowledge and dilute specific biases. 
  • Using human reviewers during the model’s fine-tuning phase and requiring them to follow guidelines that explicitly advise against favoring any political group or taking stances on controversial subjects. 
  • Using an iterative feedback loop to maintain regular communication with human reviewers and end users to refine the model’s outputs and minimize biases. 
  • Prioritizing transparency and actively seeking external feedback on model behavior and deployment strategies. 

ChatGPT Alternatives

ChatGPT is arguably the most popular generative AI chatbot, but it is not the only one. New alternatives to OpenAI’s LLMs are announced almost daily, each aimed at users with specific needs and requirements.

Notable alternatives to ChatGPT-4 include:

Google Bard: Bard is a large language model from Google AI. Google Bard is browser-based and can access the Internet in real-time. This generative AI chatbot requires a Google login and is free to use.

Google Gemini: Gemini is an integrated suite of large language models that Google AI is developing. According to Sundar Pichai, CEO of Google and Alphabet, the integrated suite was designed from scratch to generate multimodal outputs.

Llama 2: Llama 2 is a family of open-source large language models from the AI group at Meta, Facebook’s parent company. Llama 2 Long is a modified LLM version explicitly designed to respond to long prompts. 

Falcon 180B: Falcon 180B is a 180-billion-parameter large language model developed by the Technology Innovation Institute (TII) in Abu Dhabi. It is freely available for research and commercial use under the Falcon-180B TII License.

Claude: Claude is a large language model chatbot developed by the research company Anthropic. It can be accessed through a browser-based chat interface or an API in Anthropic’s developer console.

Mistral 7B: Mistral 7B is a large language model developed by Mistral AI. It is free for research and commercial use under the Apache 2.0 license. It can be run locally or in the cloud. 

Vicuna 13B: Vicuna 13B is a large language model chatbot developed by the research organization LMSYS. It is known for performing well on benchmarks such as GLUE. The GLUE and SuperGLUE benchmarks are useful tools for comparing the performance of different NLP models and identifying areas where they need improvement.

Giraffe: Giraffe is a family of large language models from Abacus.AI, a research company focused on developing and commercializing large language models. Giraffe models modify the transformer’s positional encodings to extend the context window, making them explicitly designed for long-context tasks.



Margaret Rouse

Margaret is an award-winning technical writer, teacher, and lecturer. She is known for her ability to explain complex technical concepts in plain terms to business audiences. For two decades, her definitions of IT terms have been published by Que in an encyclopedia of technology terms and cited in articles appearing in the New York Times, Time magazine, USA Today, ZDNet, and PC and Discovery magazines. Margaret joined the Techopedia team in 2011. She enjoys helping business and IT professionals find a common language; in her work, as she puts it, she builds bridges between the two domains…