Remember Me OpenAI has quietly released a new feature that instructs ChatGPT to "remember" prior conversations — and as one researcher/hacker found, it's very easily exploited. As ArsTechnica reports, security researcher Johann Rehberger found earlier this year that there was a vulnerability in the so-called "long-term conversation memory" tool, which instructs the chatbot to remember details about specific individuals and events from previous conversations.
The "long-term conversation memory" was intended to allow ChatGPT to handle multi-session interactions more human-like and provide users with an experience more akin to visiting the same chatbot multiple times. However, this new feature backfired, allowing hackers to insert false information into ChatGPT’s database of user events and interactions.
According to Rehberger, exploiting this vulnerability can have a range of negative consequences. He noted that malicious actors could essentially plant false memories in ChatGPT by informing the AI that a user is say an aviation enthusiast and this is then retained by the AI which allows for someone to share wrong or dangerous info under the guise of verified knowledge shared earlier.
In the worst-case scenario, this vulnerability could allow a chatbot that is used in important areas — such as customer support, financial services, or even critical infrastructure — to unknowingly share false information. ChatGPT is generally aware of what info it has, however, false info can be concealed in multiple various scenarios
OpenAI officials have been contacted by Rehberger regarding the issue but appears to have chosen to quietly update the system without announcing the issue publicly, which might help to circumvent potential fallout