
AI2 OLMo 3.1

The Allen Institute for AI (AI2) has released OLMo 3.1, a family of fully open 32-billion-parameter reasoning models. Unlike most open-weight releases, OLMo 3.1 ships with the complete training data, training code, evaluation scripts, and intermediate checkpoints, making it the most transparent frontier model ever published. Compared with OLMo 3, the Think 32B variant gains more than 5 points on AIME, more than 4 on ZebraLogic and IFEval, and over 20 points on IFBench. AI2 calls the Instruct 32B variant the most capable fully open chat model to date.

OLMo 2 32B

The Allen Institute for AI has launched OLMo 2 32B, a 32-billion-parameter model that surpasses GPT-3.5 and GPT-4o mini on key benchmarks. Released on March 13, 2025, it is fully open source: the model weights, training code, datasets, logs, checkpoints, and evaluation tools are all published. Trained on Google's Augusta hypercomputer, it performs especially well on reasoning, math, and other challenging benchmarks. OLMo 2 32B advances open AI by letting researchers, startups, and hobbyists inspect and build on every stage of the pipeline. While openness carries some risk of misuse, it also opens the door to educational tools and scientific advances. The model is available on Hugging Face, encouraging community involvement and innovation.