ChatGPT macOS Flaw Exposes AI Memory Risks


A newly disclosed vulnerability in OpenAI’s ChatGPT macOS app, now known as “SpAIware,” has shown the potential for spyware through the tool’s memory feature. Security experts warned that this flaw could enable data exfiltration, creating cybersecurity concerns. While OpenAI has since patched the vulnerability, the incident involves ongoing challenges in securing AI tools.

The ChatGPT Memory Vulnerability Explained


In early 2024, OpenAI introduced a memory feature in ChatGPT that allowed the AI tool to remember user inputs and interactions across sessions. This functionality was intended to create an individual user experience by eliminating the need to repeat information in future conversations. Users could also manage their stored data by instructing the tool to forget information. The introduction of this memory feature has become a new attack vector for hackers. The SpAIware vulnerability found weakness in this feature by injecting instructions into ChatGPT’s memory, making it possible for hackers to obtain sensitive data. This attack was concerning because the stored instructions could remain between chat sessions, meaning that any future conversations in the instance of ChatGPT would transmit data to a server controlled by the attacker. Security researcher Johann Rehberger explained, “This form of persistence makes the vulnerability more dangerous, as it spans across multiple conversations, exposing both past and future interactions to potential attackers.”

How the Exploit Worked

The weakness came from the memory tool’s function to retain information indefinitely unless manually deleted by the user. Hypothetically scenario, a user could be lured to a malicious website or download an infected file that triggered ChatGPT to analyze and store the data in its memory. The malicious payload would then instruct ChatGPT to send all future conversation data to an external server. This process occurs without the user’s awareness and continues until the memory is manually wiped. ChatGPT’s memory functionality was not linked to conversations, but operated across sessions, meaning that even if a user deleted a chat, the malicious memory remained active. Rehberger showed how simple prompt injections from untrusted websites or documents could manipulate ChatGPT’s memory to send confidential information, making the data exfiltration threat long-term.

Indirect Prompt Injection


This attack followed up an identified weakness involving indirect prompt injection, where hackers used prompts to manipulate ChatGPT into retaining false or harmful information. The SpAIware attack strengthened this technique by embedding spyware instructions within ChatGPT’s memory, which continued across future interactions. The memory persistence and the ability to execute indirect prompt injection resulted in a powerful form of spyware. Rehberger warned, “Since the malicious instructions are stored in ChatGPT’s memory, all new conversations will carry the attacker’s instructions and continuously send all chat data to the hacker. This allows for a type of data exfiltration that survives multiple conversations.”

OpenAI’s Response


After being notified of the vulnerability, OpenAI issued a patch in ChatGPT version 1.2024.247 to close the exfiltration vector. The fix adjusted the mechanism through which memory was attacked, but it also reminded the cybersecurity industry of the security challenges inherent in AI systems with persistent data features. Rehberger suggested that ChatGPT users should review the memories the system stores about them for suspicious or incorrect entries and clean them up as needed. Although OpenAI’s response mitigated the immediate threat, the potential for new attacks on AI tools like ChatGPT remains. Microsoft’s recent introduction of a correction feature for AI outputs, which helps detect and correct inaccuracies in real-time, also shows the importance of improving AI safety measures.

The SpAIware vulnerability exposed risks associated with memory-enabled AI applications like ChatGPT. As AI systems become integrated into everyday life, security developers will need to proactive.

Image credits: A2Z AI, Adobestock

Twitter Facebook LinkedIn Reddit Copy link Link copied to clipboard
Photo of author

Posted by

Stan Deberenx

Stan Deberenx is the Editor-in-Chief of Defensorum. Stan has many years of journalism experience on several publications. He has a reputation for attention to detail and journalist standards. Stan is a literature graduate from Sorbonne University, with a master's degree in management from Audencia/University of Cincinnati.
LinkedIn