Turns out, OpenAI’s “temporary chat” feature was about as temporary as a New Year’s resolution. Users recently discovered that the company has been quietly preserving deleted conversations since mid-May—thanks to a federal court order in The New York Times v. OpenAI case.
The Fine Print: Your Data Was Never Yours
OpenAI claims it’s all about “privacy commitments,” but let’s be real—those commitments folded faster than a cheap lawn chair under legal pressure. The court ordered OpenAI to preserve its output logs (user-deleted chats included) as potential evidence in the copyright fight. OpenAI complied immediately… and then waited three weeks to tell anyone.

Who’s affected?
- Free, Plus, Pro, and Team users: Your “deleted” chats are now in legal purgatory.
- Enterprise & Edu users: Congrats, you’re exempt—money talks.
Sam Altman’s “AI Privilege” Pipe Dream 💭
In a desperate bid to sound visionary, Altman floated the idea of “AI privilege”—comparing chatbot convos to lawyer-client confidentiality. Cute. But until AI gets a law degree or a medical license, this is just corporate wishful thinking.
Why This Matters for Businesses
If you’re using ChatGPT in your org:
- Verify your endpoints. “Zero Data Retention” isn’t the default, so confirm it’s actually in place for your traffic (see the sketch after this list).
- Assume nothing is ephemeral. Legal discovery just became a new attack vector.
- Read the room. OpenAI’s “privacy-first” branding just took a hit.

The irony? The same company selling “AI safety” can’t even keep its own deletion promises. Maybe next they’ll promise us a bridge in Brooklyn—temporarily, of course. 🏗️
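For teams hitting the API directly, one cheap bit of hygiene is to avoid opting into stored completions on each request. Here’s a minimal sketch, assuming the official `openai` Python SDK and its `store` flag on Chat Completions; the model name and prompt are placeholders. To be clear: this per-request flag is not a Zero Data Retention agreement, and it does not exempt anyone from the court’s preservation order.

```python
# Minimal sketch, not a privacy guarantee. Assumes the official `openai` Python SDK (v1+)
# and an OPENAI_API_KEY in the environment; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever your org has approved
    store=False,          # don't retain this completion for OpenAI's stored-completions tooling
    messages=[
        {"role": "user", "content": "Summarize our data retention policy in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

The flag is housekeeping; the thing that actually matters is the contract. OpenAI has said the preservation order doesn’t reach API traffic covered by a Zero Data Retention agreement, so if your org’s data can’t afford to sit in legal purgatory, get ZDR in writing.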