
OLMo 2 32B

The Allen Institute for AI has launched OLMo 2 32B, a 32-billion-parameter model that surpasses GPT-3.5 and GPT-4o mini on key benchmarks. Released on March 13, 2025, it is fully open-source, offering model weights, training code, datasets, logs, checkpoints, and evaluation tools. Trained on Google Cloud's Augusta hypercomputer, it performs strongly on reasoning, math, and challenge-style benchmarks. OLMo 2 32B promotes open AI, allowing researchers, startups, and hobbyists to explore and build upon it. Despite risks of misuse, its open-source nature holds promise for educational tools and scientific advances. The model is available on Hugging Face, encouraging community involvement and innovation.
2025-03-16
Updated 2025-03-16 08:35:30

The AI race has a new star: OLMo 2 32B, launched by the Allen Institute for AI (AI2) on March 13, 2025. This 32-billion-parameter model beats GPT-3.5 and GPT-4o mini on key benchmarks—and it’s fully open-source. You get the code, data, and more, free for all.

A New Breed of Openness

Unlike secretive tech giants, AI2 shares everything with OLMo 2 32B. Released just days ago, it includes:

  • Model weights
  • Training code
  • The 6-trillion-token training dataset
  • Logs and checkpoints
  • Evaluation tools (OLMES)

Check the official release announcement or grab it on Hugging Face. Trained on Google Cloud’s Augusta hypercomputer with 160 nodes of H100 GPUs, it’s powerful yet efficient.

Punching Above Its Weight

How good is it? Here’s a quick look:

Benchmark          OLMo 2 32B    GPT-3.5    GPT-4o mini
MMLU (Reasoning)   Outperforms   Baseline   Competitive
GSM8K (Math)       Strong        Weaker     Close
ARC-Challenge      Solid         Lower      Matches

It’s tuned with a mix of supervised fine-tuning, preference optimization, and reinforcement learning—ready to roll in tools like vLLM and Transformers.
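As a concrete starting point, here is a minimal sketch of prompting the instruct variant with Hugging Face Transformers. The repository name "allenai/OLMo-2-0325-32B-Instruct" is an assumption based on AI2's naming scheme, so verify the exact identifier on the Hugging Face collection, and keep in mind that a 32-billion-parameter model needs substantial GPU memory (or quantization) to run.

```python
# Minimal sketch: prompting OLMo 2 32B with Hugging Face Transformers.
# The model ID below is an assumption; verify it on AI2's Hugging Face page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0325-32B-Instruct"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # requires `accelerate`; spreads layers across available GPUs
    torch_dtype="auto",   # use the checkpoint's native precision
)

prompt = "Explain in two sentences why open-source language models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```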

Why It Matters

OLMo 2 32B is more than a model—it’s a push for open AI. Researchers can study it, startups can build on it, and hobbyists can play with it. It’s a counterpoint to closed systems, joining recent releases like Cohere’s Command A and Eclipse’s Theia AI.

The Bigger Picture

Openness has risks—like misuse for misinformation—but the potential is huge: better education tools, scientific breakthroughs, and access for all. It’s a step toward an AI future we can all shape.

What’s Next?

OLMo 2 32B follows its 7B and 13B siblings from 2024. What’s ahead—bigger models, new features? With the community in charge, anything’s possible. Download it from Hugging Face and start building.
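If you would rather serve the model than run one-off generations, a vLLM sketch under the same assumptions (unverified repository name; a 32B model typically needs several GPUs, set via tensor_parallel_size) looks like this:

```python
# Minimal vLLM serving sketch; the repo name is an assumption, check Hugging Face.
from vllm import LLM, SamplingParams

llm = LLM(
    model="allenai/OLMo-2-0325-32B-Instruct",  # assumed repository name
    tensor_parallel_size=2,                    # adjust to your GPU count
)
params = SamplingParams(temperature=0.7, max_tokens=200)
outputs = llm.generate(["Summarize why fully open model releases matter."], params)
print(outputs[0].outputs[0].text)
```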

Tell us what you’ll create with OLMo 2 32B in the comments or on X!

SingularityByte.com is your pulse on the AI revolution. Stay tuned for more breakthroughs, byte by byte.
