Google’s Gemini chatbot now has a memory feature that lets it retain and recall users’ preferences and details about their lives and work.
For example, you can now tell Gemini to store preferences such as using simple language, excluding meat from recipes, coding only in JavaScript, noting that you don’t own a car, or acknowledging that you’re an English teacher.
Rolling out starting today, you can ask Gemini Advanced to remember your interests and preferences for more helpful, relevant responses. Easily view, edit, or delete any information you've shared, and see when it’s used.
Try it in Gemini Advanced → https://t.co/Yh38BPvqjp pic.twitter.com/gR354OZxnV
— Google Gemini App (@GeminiApp) November 19, 2024
Unlike ChatGPT, which adds memories automatically, Gemini requires users to save context themselves: you can either share details during a conversation or update them through the new saved-info page in the Gemini app.
You can view, modify, and remove the information Gemini remembers in the website’s Settings section, accessible from the sidebar.
The memory feature, currently limited to English-language prompts, can be turned off at any time, though saved memories persist until the user deletes them manually. According to TechCrunch, Gemini does not use saved memories for model training.
The feature is available through Gemini Advanced, part of the $20-per-month Google One AI Premium plan. It currently works only on the web client and is not yet available in Gemini’s iOS and Android apps.
Security Risks of Memory Tools
In April, OpenAI launched the Memory feature for ChatGPT Plus subscribers, allowing ChatGPT to remember details from past conversations. Like Gemini’s tool, it enables more personalized interactions by recalling previously provided details.
However, memory features like those in ChatGPT and Gemini are vulnerable to misuse if not adequately secured. Earlier this year, a researcher discovered a flaw in ChatGPT’s memory feature that allowed attackers to implant false memories and exfiltrate data. Despite a partial fix, risks remain wherever untrusted content can inject malicious prompts.