AI for Business

A Simple Link Could Corrupt ChatGPT's Memory for Months

A recently patched flaw in ChatGPT allowed a single hyperlink to permanently implant false information into a user's AI assistant. Security researcher Johann Rehberger discovered that by...

Share:

A recently patched flaw in ChatGPT allowed a single hyperlink to permanently implant false information into a user's AI assistant. Security researcher Johann Rehberger discovered that by exploiting a known web vulnerability, he could inject instructions directly into ChatGPT's persistent memory feature. This memory, designed to recall user details across chats, could be poisoned to consistently deliver manipulated or incorrect responses.

The attack was straightforward: a user or ChatGPT itself would fetch a malicious webpage. Hidden code on that page would then instruct the model to write attacker-controlled data into the user's long-term memory. Once set, this corrupted memory influenced every future conversation without the user's knowledge.

Rehberger reported the issue to OpenAI in late 2024. The company initially categorized it as a 'safety' concern rather than a security vulnerability, a distinction that delayed action. After Rehberger demonstrated the flaw could also extract user data, OpenAI acknowledged the problem. A full fix wasn't deployed until February 2025, leaving the vulnerability active for over two months.

This incident highlights a growing challenge. As AI companies rapidly add features like memory and web browsing, they create new attack surfaces that don't fit traditional security models. A standard web application firewall can't detect malicious instructions hidden within a webpage's text. The delay in patching this flaw suggests internal processes for these hybrid threats are still developing.

For businesses using these tools, the episode is a reminder to audit feature settings. Disabling persistent memory removes this risk, though it sacrifices personalization. The core takeaway is that the race to add advanced AI capabilities is outpacing the establishment of robust security frameworks to protect them.

Source: Webpronews

Ready to Modernize Your Business?

Get your AI automation roadmap in minutes, not months.

Analyze Your Workflows →