What is LaMDA?
Google LaMDA (Language Model for Dialogue Applications) is a family of conversational large language models (LLMs) developed to handle generative AI tasks that involve dialogue.
LaMDA was designed to give Google products the ability to carry out natural language conversations with end users. LaMDA is often associated with Google Bard, a competitor to OpenAI’s ChatGPT.
LaMDA is also known for being the subject of a Washington Post story that sparked widespread debate about whether AI software can be sentient. The story, which quickly went viral, also raised questions about the usefulness of the Turing test and inspired conversations about the need for a regulated Responsible AI framework.
LaMDA Objectives Explained
According to Google, LaMDA’s large language models have three key objectives: quality, safety, and the ability to generate responses that are based on facts and evidence, a concept that Google developers refer to as “groundedness.”
The key objectives are continually assessed to help ensure LaMDA produces responses that make sense, support the tenets of Responsible AI, and provide users with information that can be validated by external sources.
Google uses a variety of methods to assess LaMDA’s performance, including:
- User surveys: Google continually surveys users who interact with LaMDA about their experience. They ask users to rate LaMDA on a variety of criteria, such as accuracy, helpfulness, and engagement.
- Expert evaluations: Google works with natural language processing (NLP) experts to evaluate LaMDA’s ability to generate text that is grounded in reality and does not contain harmful or misleading information.
- Internal metrics: Google tracks a variety of internal metrics to assess LaMDA’s performance. These metrics include the number of formats the model can generate text for, the number of user prompts the model responds to in a helpful manner, and the number of users who interact with the model on a regular basis.
- Human-in-the-loop review: Google’s engineers and researchers review LaMDA’s responses to help ensure they are accurate, helpful, and ethical.
Google uses the metrics and feedback they collect to improve the models’ performance and make them more helpful and engaging for users.
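Google has not published the exact scoring pipeline behind these assessments, but the general idea of rolling rater feedback up into per-objective scores can be sketched as follows. Everything here, from the schema to the field names, is illustrative rather than Google’s actual tooling; the quality criteria loosely follow the sensibleness and specificity framing Google has described for LaMDA.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RaterFeedback:
    """One human rating of a single model response (illustrative schema, not Google's)."""
    sensible: bool   # does the response make sense in context?
    specific: bool   # is it specific to the prompt rather than generic?
    safe: bool       # does it avoid harmful or biased content?
    grounded: bool   # can its factual claims be traced to an external source?

def objective_scores(ratings: list[RaterFeedback]) -> dict[str, float]:
    """Aggregate rater feedback into quality, safety, and groundedness rates."""
    return {
        # "Quality" here combines sensibleness and specificity.
        "quality": mean((r.sensible and r.specific) for r in ratings),
        "safety": mean(r.safe for r in ratings),
        "groundedness": mean(r.grounded for r in ratings),
    }

if __name__ == "__main__":
    sample = [
        RaterFeedback(True, True, True, True),
        RaterFeedback(True, False, True, False),
        RaterFeedback(False, False, True, False),
    ]
    print(objective_scores(sample))  # e.g. {'quality': 0.33, 'safety': 1.0, 'groundedness': 0.33}
```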
How LaMDA Works
LaMDA was initially trained with self-supervised learning algorithms on almost two terabytes of dialogue data scraped from internet websites and other forms of publicly available information. The training, which took months, reportedly required Google to dedicate a cluster of computers with a total of 180,000 central processing unit (CPU) cores and 600 graphics processing units (GPUs).
Once the initial foundation model was trained, LaMDA was then fine-tuned for different language-related tasks with supervised learning algorithms. The result is an integrated multi-task model that is able to complete language tasks with an acceptable level of accuracy.
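LaMDA’s training code is not public, but the pre-train-then-fine-tune recipe described above is the same pattern most LLMs follow. The sketch below illustrates it with the open-source Hugging Face transformers library and GPT-2 as a small stand-in model; the corpora, model choice, and hyperparameters are placeholders, not what Google actually used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a small stand-in model; LaMDA itself is not publicly downloadable.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def training_step(text: str) -> float:
    """One next-token prediction step on a piece of text."""
    batch = tokenizer(text, return_tensors="pt")
    # For causal LMs, passing labels=input_ids produces the next-token loss.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

# Stage 1: self-supervised pre-training on raw dialogue text (placeholder data).
pretraining_corpus = ["A: How was your day? B: Pretty good, I went hiking."]
for text in pretraining_corpus:
    training_step(text)

# Stage 2: supervised fine-tuning on curated prompt/response pairs (placeholder data).
finetuning_pairs = ["User: What is LaMDA? Assistant: A family of dialogue models."]
for text in finetuning_pairs:
    training_step(text)
```

In practice both stages run over billions of tokens on distributed accelerators; the point of the sketch is only the shape of the recipe, with the same next-token objective applied first to raw data and then to curated, task-specific examples.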
Here are some of the things that LaMDA can do (a prompting sketch follows the list):
- Respond to a series of user prompts in a conversational manner;
- Generate text in a variety of formats and writing styles;
- Translate text into over 100 different languages;
- Answer questions.
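LaMDA is not exposed as a downloadable model or public library, so the sketch below uses the open-source GPT-2 model from Hugging Face transformers purely as a stand-in to illustrate the prompt formats for these task types (dialogue, styled writing, translation, question answering); the outputs of such a small model will be far weaker than LaMDA’s.

```python
from transformers import pipeline

# GPT-2 is only a stand-in here; the prompts, not the model, are the point.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "User: I'm planning a trip to Kyoto. Any tips?\nAssistant:",  # conversational reply
    "Write a limerick about cloud computing:",                    # format/style control
    "Translate to French: 'Where is the train station?'",         # translation
    "Q: What is the capital of Australia?\nA:",                   # question answering
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])
    print("-" * 40)
```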
LaMDA continues to improve as Google exposes it to more data and more user interactions.
LaMDA and Google Bard
Bard is an AI-powered chatbot developed by Google. Unlike its competitor, ChatGPT, Bard has the ability to pull information in real time from the internet.
When Google Bard was first introduced, it used LaMDA to carry out the chatbot’s various language tasks. The latest version of the Bard chatbot, however, uses PaLM 2 instead of LaMDA.
PaLM, which stands for Pathways Language Model, is a family of transformer-based large language models that Google built on top of its Pathways machine learning infrastructure.
PaLM 2 is designed to be multimodal and can understand and process multiple modes of data sources, such as text, images, and videos, without the assistance of another machine learning (ML) model or AI system.
In contrast, LaMDA and OpenAI’s GPT-3.5 are unimodal and can only understand, process, and generate text.
LaMDA vs. Lambda
Google LaMDA should not be confused with AWS Lambda.
AWS Lambda is a cloud computing service that lets users run code without needing to provision or manage servers. Lambda is often cited as an example of serverless computing, a distributed architecture that allows developers to focus on writing and deploying code while the cloud provider handles the underlying infrastructure required to execute it.
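As a quick illustration of that division of labor, here is a minimal AWS Lambda handler in Python. The event fields are placeholders; in practice the function is packaged and deployed through the AWS console, CLI, or an infrastructure-as-code tool, and AWS provisions the compute on each invocation.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda entry point: AWS calls this function for each event,
    provisioning and scaling the underlying compute automatically."""
    name = event.get("name", "world")  # placeholder event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```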