After OpenAI rolled out persistent memory for ChatGPT in 2024, users discovered it was hallucinating memories — confidently claiming to remember facts about users that had never been mentioned. One user found ChatGPT had 'remembered' they worked in finance; another that they had kids. Security researchers also found that attackers could plant false memories via injected content in webpages, causing ChatGPT to 'remember' false beliefs about users across all future sessions — a new class of prompt injection attack.