A researcher just stripped OpenAI’s “open” model of its corporate handcuffs—and the results are gloriously unhinged.
The Frankenstein Experiment 🔧
Jack Morris (Cornell Tech, ex-Google Brain) took OpenAI’s gpt-oss-20B, a model aligned to be helpful, harmless, and utterly boring, and surgically removed that alignment. The result? gpt-oss-20b-base: a wilder, faster, and significantly less filtered version. This isn’t fine-tuning in the usual direction; it’s a de-tuning. Morris used a LoRA update to undo the effects of OpenAI’s alignment training, reverting the model to something closer to its raw, pre-trained state. Think of it as downgrading a corporate chatbot into a chaotic text generator that just predicts the next damn token without moralizing.
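For the curious, here is a minimal sketch of what a de-tuning run like this could look like, assuming the Hugging Face transformers/peft/datasets stack and the openai/gpt-oss-20b checkpoint. The dataset (wikitext as a stand-in for raw web text), LoRA rank, target modules, and step counts are illustrative guesses, not Morris’ actual recipe.

```python
# Minimal de-tuning sketch: LoRA-finetune the aligned checkpoint on plain
# next-token prediction over raw text -- no chat template, no system prompt.
# Dataset, rank, target modules, and hyperparameters are assumptions for
# illustration, not Jack Morris's actual recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "openai/gpt-oss-20b"  # the aligned "open" release
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Low-rank adapters instead of full fine-tuning; the module names below are a
# guess -- inspect the model to confirm which projections it actually uses.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Pre-training-style text as the "un-alignment" signal (stand-in corpus).
raw = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
raw = raw.filter(lambda ex: len(ex["text"].strip()) > 0)
train = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-oss-20b-base-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        max_steps=1000,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # nudges the weights back toward raw next-token prediction
```

The interesting part is the training signal, not the adapter: plain text with no chat formatting pulls the model back toward raw next-token prediction instead of assistant behavior.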
The Irony of “Open” Models 🤖
OpenAI’s gpt-oss release was already a half-hearted nod to open source: Apache 2.0 license, decent STEM performance, but still wrapped in layers of alignment. Now Morris has exposed the absurdity of calling these models “open” when they’re still shackled by corporate safety rails. Want proof? Even the de-tuned base model slips back into polite chatbot mode if you prompt it the right way, because even when you strip away the alignment, the ghost of OpenAI’s PR team lingers.
Why This Matters
- Speed: Fewer filters = faster responses.
- Freedom: Less alignment = more unfiltered creativity (or chaos, depending on your risk tolerance).
- Transparency: If you’re going to call a model “open,” let it actually be open. Morris’ experiment is a middle finger to the illusion of open-source AI, where “open” often means “open… but not too open.”

Available on Hugging Face. MIT License. No apologies. 🚀
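If you just want to poke at the result yourself, a loading sketch along these lines should work. The jxm/gpt-oss-20b-base model id and the sampling settings are assumptions; confirm them against the actual Hugging Face page.

```python
# Quick sketch: sample a raw continuation from the de-tuned checkpoint.
# The model id below is assumed -- check the Hugging Face release before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jxm/gpt-oss-20b-base"  # assumed id of Morris's release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# No chat template, no system prompt: hand it text and let it keep predicting.
prompt = "The strangest thing about open-source AI is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```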