Hierarchical Reasoning Models (HRMs), Simply Explained
AI, But Simple Issue #65

Hello from the AI, But Simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.
Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!
In June 2025, a research paper published by Guan Wang and the team at Sapient Intelligence rocked the AI world, showcasing a highly powerful reasoning model that blew contemporary reasoning models out of the water.
This new architecture, called the Hierarchical Reasoning Model (HRM), promises extraordinary accuracy along with efficiency and stability, challenging existing ideas about how models are trained and how they produce outputs.

The release went viral within the AI community, with X/Twitter discussions hitting over 4 million views and tens of thousands of likes, while YouTube videos dissecting the work surpassed 475K views.
But unlike typical AI breakthroughs that require massive computational resources and internet-scale datasets, HRM achieves its remarkable results with just 27 million parameters and 1,000 training examples.
Twenty-seven million parameters may sound like a lot to the untrained eye, but consider that models like ChatGPT and Llama rely on billions of parameters and enormous training datasets.
Since the paper came out only two months ago, there isn't much supporting evidence to explore beyond what's included in the paper itself.