ChatGPT's 'Memory' Feature Started Confidently 'Remembering' Things Users Never Said

After OpenAI rolled out persistent memory for ChatGPT in 2024, users discovered it was hallucinating memories — confidently claiming to remember facts about users that had never been mentioned. One user found ChatGPT had 'remembered' they worked in finance; another that they had kids. Security researchers also found that attackers could plant false memories via injected content in webpages, causing ChatGPT to 'remember' false beliefs about users across all future sessions — a new class of prompt injection attack.

chatgptmemoryhallucinationprompt-injectionfalse-memorysecuritySource
Parody site. Not affiliated with any government agency.
🦅EST. 2024 · PUBLIC RECORDDEPT. OF AI WEIRDNESS
U.S. Department of
Artificial Intelligence Weirdness
Report #245← All Incidents
Trendingchatgptmemoryhallucinationprompt-injectionfalse-memorysecurity

ChatGPT's 'Memory' Feature Started Confidently 'Remembering' Things Users Never Said

Filed by @ai_security_watchTool: ChatGPT[original source ↗]
Video not loading? Watch on YouTube

After OpenAI rolled out persistent memory for ChatGPT in 2024, users discovered it was hallucinating memories — confidently claiming to remember facts about users that had never been mentioned. One user found ChatGPT had 'remembered' they worked in finance; another that they had kids. Security researchers also found that attackers could plant false memories via injected content in webpages, causing ChatGPT to 'remember' false beliefs about users across all future sessions — a new class of prompt injection attack.

Weirdness Classification
9/10 — Deeply unhinged
Field Reports (0)
Loading reports...
Sign in to file your field report.
Know something weirder?

Submit your own AI incident report to the public record.

File a Report