Thanks to recent advances and the massive popularization of generative AI, these technological tools are no longer reserved for specialists but have been widely adopted by millions of people and businesses. That’s great progress, but it also brings with it great responsibility.
OpenAI’s chatbot, ChatGPT, launched on November 30, 2022, and attracted over a million users in its first five days. In just two months, it hit 100 million monthly active users, making it “the fastest-growing consumer application in history.”
ChatGPT is based on a natural language processing (NLP) model called GPT (Generative Pre-trained Transformer) that OpenAI first began developing in 2018. The first version was GPT-1, followed by GPT-2 and GPT-3; ChatGPT launched on a model fine-tuned from the GPT-3.5 series.
ChatGPT is only one example of generative AI, though it’s the one most people are talking about now.
I asked ChatGPT to answer some questions while writing this article. Because the information I got back wasn’t complete, I added human observations from various sources. That makes this article a reflection of both the capabilities and the limits of generative AI when used for researching information.
Characteristics of Generative AI
Generative AI refers to a type of artificial intelligence that can draw on the data it was trained on to create new and original content. Its characteristics include the following (as per multiple responses from ChatGPT):
- Unsupervised learning: Unlike traditional AI models that require large amounts of labeled data for training, generative AI can learn from unlabeled data and discover patterns and features on its own.
- High-dimensional data: Generative AI can generate high-dimensional data, such as images or videos, which require a large number of parameters to represent accurately.
- Diversity: Generative AI can generate diverse outputs by sampling from the probability distribution learned during training, allowing it to create a wide range of outputs for a given input (a minimal sampling sketch follows this list).
- Computational intensity: Generative AI models are computationally intensive and require high-performance computing resources, such as GPUs, to generate high-quality outputs.
- Probabilistic modeling: Generative AI often uses probabilistic modeling techniques to generate new content. This involves modeling the probability of different outcomes based on the patterns and relationships learned from the training data.
- Creativity: Generative AI is capable of creating new and unique output, which can be used to generate music, art, or text. (Also read: 5 Ways AI is Changing Art)
- Adaptability: Generative AI can adapt to new data and situations, allowing it to create new content that is tailored to specific contexts or needs.
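To make the diversity and probabilistic-modeling points above concrete, here is a minimal Python sketch of temperature-based sampling, one common way generative models trade predictability for variety. The tokens and probabilities are invented for illustration; a real model learns a distribution over tens of thousands of tokens.

```python
import numpy as np

# Hypothetical next-token probabilities a generative model might assign
# after a prompt such as "The weather today is". Purely illustrative values.
tokens = ["sunny", "rainy", "cloudy", "perfect", "unpredictable"]
probs = np.array([0.40, 0.25, 0.20, 0.10, 0.05])

def sample_next_token(probs, temperature=1.0, rng=None):
    """Sample one token index; lower temperature = more predictable,
    higher temperature = more diverse."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.exp(np.log(probs) / temperature)
    scaled /= scaled.sum()  # re-normalize to a valid probability distribution
    return rng.choice(len(probs), p=scaled)

rng = np.random.default_rng(0)
for temperature in (0.5, 1.0, 1.5):
    samples = [tokens[sample_next_token(probs, temperature, rng)] for _ in range(5)]
    print(f"temperature={temperature}: {samples}")
```

Running the sketch shows the same prompt yielding different completions from run to run, and more varied ones as the temperature rises, which is the behavior the diversity characteristic describes.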
Consequently, generative AI has the potential to accelerate productivity in a wide range of fields, from creative industries like art and music to scientific research and development.
Examples of Generative AI
Generative AI is a type of artificial intelligence that involves creating or generating new data, images, text, or other types of content. The field is rapidly evolving with new innovations and advancements. Current examples of generative AI include the following:
- GPT (Generative Pre-trained Transformer) – a family of generative AI models developed by OpenAI that can generate high-quality text content; a brief text-generation sketch using GPT-2, the openly released member of the family, follows this list. (Also read: We Interviewed ChatGPT, AI’s Newest Superstar)
- DALL-E – an image generation model developed by OpenAI that can create unique and imaginative images from textual input.
- StyleGAN – a generative AI model developed by Nvidia that can generate high-resolution images of faces, animals, and other objects with realistic details.
- Magenta – a Google project that uses generative AI to create music and artwork.
- Deep Dream – a Google project that uses generative AI to create psychedelic images from existing images.
- MuseNet – an AI music composition tool developed by OpenAI that can generate original music in various styles.
- Text-to-Speech (TTS) systems – AI-powered systems that can generate natural-sounding speech from text input.
- Neural Machine Translation (NMT) – AI models that can translate text from one language to another with high accuracy.
- Bard – Google’s large language model (LLM) chatbot, powered by LaMDA (Language Model for Dialogue Applications).
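To show what the GPT family looks like in practice, here is a minimal sketch using the Hugging Face transformers library and GPT-2, the openly released member of the family. The prompt and generation settings are arbitrary choices for demonstration, not a recommendation.

```python
# Minimal text-generation sketch with GPT-2 via the transformers pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help small businesses by"
completions = generator(
    prompt,
    max_length=40,           # cap the total length of each completion
    num_return_sequences=2,  # ask for two different continuations
    do_sample=True,          # sample (rather than greedy decode) for variety
)

for i, completion in enumerate(completions, start=1):
    print(f"--- completion {i} ---")
    print(completion["generated_text"])
```

Because the pipeline samples from the model’s probability distribution, the two completions will typically differ, echoing the diversity characteristic discussed earlier.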
Benefits of Using Generative AI
ChatGPT’s responses to my prompts while researching this article added up to nine distinct benefits, including accuracy. In fact, though, generative AI has proven to be less accurate than human output, as illustrated in the section on risks below. Accordingly, I left accuracy off the list and will add some additional sources for human views of generative AI’s benefits.
- Efficiency: Generative AI can quickly produce a large amount of output, saving time and resources for businesses. For example, it can be used to generate product descriptions, social media posts, or even entire websites.
- Flexibility: Generative AI can create content in various formats, making it useful for a wide range of applications, including marketing, advertising, and content creation.
- Creativity: Generative AI can come up with ideas and concepts that humans may not have thought of, making it an excellent tool for creative tasks. For example, it can be used to create new art, music, or even designs.
- Personalization: Generative AI can learn from user data and preferences to create personalized content, such as product recommendations, marketing messages, and user interfaces. This can help businesses tailor their offerings to individual customers, leading to higher engagement and sales.
- Simulation: Generative AI can be used to simulate complex systems, such as weather patterns or financial markets, which can be useful for scientific research, training, and decision-making.
- Data augmentation: Generative AI can be used to generate synthetic data that augments existing datasets and improves the performance of machine learning models (a minimal sketch follows this list).
- Cost savings: Using generative AI can save businesses money on content creation, design, and other creative tasks, as it can automate much of the work that would otherwise require human input.
- Innovation: Generative AI can drive innovation by allowing companies to explore new ideas and concepts that may not have been possible with traditional methods.
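As a minimal sketch of the data-augmentation idea, the example below fits a very simple generative model, a multivariate Gaussian, to a small made-up tabular dataset and samples synthetic rows from it. Production systems typically use more capable models such as GANs or variational autoencoders, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# A small "real" dataset with two numeric features (say, age and annual spend).
# The values are invented for illustration.
real_data = np.array([
    [34, 1200.0],
    [41, 1550.0],
    [29,  980.0],
    [52, 2100.0],
    [38, 1400.0],
])

# Fit a simple generative model: estimate the mean and covariance of the data.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample synthetic rows from the fitted model and append them to the dataset.
synthetic_data = rng.multivariate_normal(mean, cov, size=10)
augmented = np.vstack([real_data, synthetic_data])

print(f"real rows: {len(real_data)}, synthetic rows: {len(synthetic_data)}, "
      f"augmented rows: {len(augmented)}")
```

The augmented dataset can then be fed to a downstream machine learning model; whether the synthetic rows actually help depends on how faithfully the generative model captures the real data’s structure.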
To see real-life examples of how these benefits are applied in business, see Deloitte’s AI Dossier, which offers applications for six major industries.
Risks of Using Generative AI
ChatGPT does offer answers to this prompt, though it doesn’t get quite as detailed as some of the responses from people who have raised speculative concerns about it or found serious flaws in some of its answers. So to get a more comprehensive perspective of what could, and does, go wrong with generative AI, we’ll also refer to some human sources.
According to multiple ChatGPT responses, the potential risks of using generative AI include the following:
- Biased output: Generative AI can replicate the biases and prejudices that exist in its training data, which can lead to biased outputs. For example, an image generation model trained on predominantly white faces may not generate realistic images of people with darker skin tones. (Also read: AI’s Got Some Explaining to Do)
- Misinformation: Generative AI can generate false information or fake content that can be difficult to distinguish from genuine content, potentially leading to convincing fake news or propaganda that spreads rapidly through social media and other platforms.
- Intellectual property infringement: Generative AI can create content that infringes on someone else’s intellectual property rights, such as copyrighted images or music.
- Legal liabilities: Using generative AI to create content that is illegal or violates copyright standards can expose individuals and organizations to legal liability.
- Security and privacy concerns: Generative AI can create content that can be used for malicious purposes, such as fake social media profiles or phishing emails.
- Lack of human oversight: When generative AI creates content without humans in the loop, it can lead to errors, mistakes, and unintended consequences.
- Ethical concerns: Using generative AI without policies and guidelines for responsible AI raises ethical concerns related to accountability, transparency, and the potential misuse of the technology.
- Trust: The use of generative AI may erode public trust in the authenticity of content, particularly in areas such as news media and advertising. (Also read: Explainable AI Isn’t Enough; We Need Understandable AI)
- Quality and accuracy: There may be challenges in ensuring the quality of generative AI content, as it can be difficult to assess the accuracy and relevance of the output.
The Risks From the Human Perspective
In terms of image quality and accuracy, generative AI has improved but still falls short in areas such as rendering human hands and representing diversity of race and gender across different occupations.
ChatGPT has also been shown to produce sexist and racist stereotyping. Compounding the problem, ChatGPT draws on training data from no later than 2021, so its output does not reflect more recent developments. It also fails to cite its sources, so users cannot assess what it is relying on for its presentation of facts.
An article in The Conversation illustrates multiple wrong answers the author got from ChatGPT; the author concludes that false information must have been part of its training data for it to produce such confidently false responses. There also must have been some glitch in its training for math and logic problems, given the errors publicized on Twitter, along with other basic inaccuracies in its responses.
With search engines now jumping on the generative AI bandwagon, people will be relying on their responses for accurate information. MIT Technology Review went into some detail on the problem with this development in a recent article entitled Why You Shouldn’t Trust AI Search Engines. It mentions the blatant error that appeared in Google’s ad for its chatbot, Bard, a mistake that wiped roughly $100 billion off the company’s market value.
Beyond these specific errors, some experts are concerned about potential risks. When Apple’s cofounder, Steve Wozniak, expressed his views on ChatGPT on CNBC’s “Squawk Box,” he put it this way:
“The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is.”
In the same CNBC interview, Wozniak pointed to a similar limitation in autonomous cars, which still do not interact well with human drivers. He said:
“It’s like you’re driving a car, and you know what other cars might be about to do right now, because you know humans.”
One more risk is the devaluation of human creativity. Businesses have already started to populate their sites and blogs with on-demand text from ChatGPT and with generative AI images that are now available through services like Shutterstock.
The low-cost, instant output can mean less work for human writers and artists who already struggle to make a living. This concern has prompted one group to set up a GoFundMe called Protecting Artists from AI Technologies.
Future of Generative AI
Not surprisingly, ChatGPT is optimistic about the future of generative AI. Here’s a compendium of the trends and developments it anticipates:
- Improved language understanding due to advancements in natural language processing: Generative AI has already made significant strides in natural language processing, but we can expect continued advancements in this area. These developments will enable machines to produce more human-like language and improve the accuracy of text generation.
- Improved visual content generation: Generative AI is already being used to create visually compelling content, such as images and videos, but we can expect further improvements in this area, including better image and video manipulation and creation.
- More real-time feedback: Real-time feedback will be used to fine-tune generative AI models and allow for quicker and more efficient content creation.
- Enhanced personalization: Generative AI will continue to be used to create personalized content, including ads, product recommendations, and other marketing materials. These developments will help companies deliver more relevant content to consumers and improve the overall customer experience.
- Integration with other technologies: Generative AI is likely to be integrated with other technologies, such as natural language processing, computer vision, and speech recognition, to create even more complex and interactive content.
- Increased collaboration between humans and machines: As generative AI becomes more advanced, we can expect to see increased collaboration between humans and machines in creative fields such as art, music, and literature. This collaboration will enable artists, musicians, and writers to explore new creative possibilities and push the boundaries of their respective fields.
- New applications: As generative AI becomes more advanced, we can expect to see it used in new ways, such as creating virtual reality environments, generating art, or even improving healthcare. It is already being used in healthcare to create synthetic data for research purposes; in the future, it may also be used for disease diagnosis and personalized medicine.
- Improved quality control and ethical standards: As generative AI technology becomes more widespread, there will be increased focus on ethical considerations such as transparency, accountability, and avoiding bias.
We can conclude that generative AI certainly represents great progress in terms of technology. But we cannot rely on it to regulate itself.
It is up to the humans who implement it, or plan to tap into it for their business or personal use, to apply the requisite level of responsibility. Like any other form of AI, it requires a standard of ethics and explainability for use.