Gemini Memory Feature Lets Users Save and Manage Personal Preferences

Why Trust Techopedia
Key Takeaways

  • Gemini’s memory feature allows users to save preferences for personalized interactions.
  • Users can manage saved memories via the website’s Settings section, with the option to delete them.
  • Memory tools like Gemini’s are vulnerable to exploitation if not secured properly.

Google’s Gemini chatbot now has a memory tool that enables the chatbot to retain and recall users’ preferences and details about their lives and work.

For example, you can now tell Gemini to store preferences such as using simple language, excluding meat from recipes, coding only in JavaScript, noting that you don’t own a car, or acknowledging that you’re an English teacher.

 

 

Unlike ChatGPT, which automatically adds memories, Gemini lets users manually or automatically save context. You can either share details during a conversation or update them through the new saved-info page on the Gemini app.

You can view, modify, and remove the information Gemini remembers in the website’s Settings section, accessible from the sidebar for easy management.

The memory feature, currently limited to English-language prompts, can be turned off at any time, but saved memories will persist until the user chooses to delete them manually. However, according to TechCrunch, Gemini does not utilize saved memories for model training.

Priority access to this feature is accessible through Gemini Advanced, which is part of the $20 monthly Google One AI Premium plan. It is currently only available on the web client and is not yet accessible through Gemini’s iOS and Android apps.

Security Risks of Memory Tools

In April, OpenAI launched the Memory feature for ChatGPT Plus subscribers, allowing it to remember details from past conversations. Like Gemini, this enables more personalized interactions by recalling previously provided details.

However, memory features like those in ChatGPT and Gemini are vulnerable to misuse if not adequately secured. Earlier this year, a researcher discovered a flaw in ChatGPT’s memory that allowed attackers to implant false memories and steal data. Despite a partial fix, risks remain from untrusted content injecting malicious prompts.