Mistral’s Small AI Update: Bigger Hype or Actual Progress?


The French AI darling Mistral just dropped an update to its Small model (3.1 → 3.2), promising better instruction-following and fewer infinite rants. But is this a meaningful leap or just version inflation?

What’s New? (And Does It Matter?)

Mistral claims Small 3.2 is more stable, sticks to instructions better, and doesn't spiral into endless loops of repetitive nonsense, something OpenAI still struggles with. Their internal instruction-following benchmark shows a roughly two-percentage-point bump (82.75% → 84.78%). Not exactly earth-shattering, but hey, progress is progress. Meanwhile, some rando on X (formerly Twitter) griped that it got worse at MMLU (Massive Multitask Language Understanding). So, is Mistral polishing one corner while another gathers dust? Classic.

The Real Win? EU Compliance & Open Source 🇪🇺

Mistral’s biggest flex isn’t the marginal performance tweaks—it’s that this thing is Apache 2.0 licensed, GDPR-friendly, and EU AI Act-compliant. In a world where every other AI model comes with legal landmines, that’s a rare (and smart) selling point.

The Bottom Line

If you’re building apps that need predictable, bounded outputs, Small 3.2 might save you some headaches. But if you were expecting a GPT-4 killer? Keep waiting. Mistral’s playing the long game—slow, steady, and (mostly) sane. Now, if only they could fix that MMLU regression… 🧐
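Want to kick the tires yourself? Here's a minimal sketch using Mistral's official Python SDK. The `mistralai` package, the `mistral-small-latest` model alias (which I'm assuming resolves to the newest Small release), and the exact prompt are all just placeholders for illustration; capping `max_tokens` and keeping the temperature low is the boring-but-effective way to get those predictable, bounded outputs.

```python
# Minimal sketch: query Mistral Small via the official Python SDK.
# Assumes `pip install mistralai` and an API key from La Plateforme
# exported as MISTRAL_API_KEY.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",  # alias assumed to point at the latest Small release
    messages=[
        {"role": "system", "content": "Answer in at most three sentences."},
        {"role": "user", "content": "Summarize what changed between Small 3.1 and 3.2."},
    ],
    max_tokens=256,   # hard cap keeps the output bounded even if the model rambles
    temperature=0.2,  # low temperature for more predictable answers
)

print(response.choices[0].message.content)
```

Nothing fancy, but if the 3.2 instruction-following gains are real, that three-sentence system prompt should get respected more often than it did on 3.1.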

Stay in touch

Simply drop me a message on Twitter.