New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

Singapore-based Sapient Intelligence has introduced a new AI architecture capable of matching or even surpassing large language models (LLMs) on complex reasoning tasks while being significantly smaller and more data-efficient. Named the Hierarchical Reasoning Model (HRM), the architecture draws inspiration from the human brain's distinct systems for slow, deliberate planning and rapid, intuitive computation. What sets HRM apart is its efficiency: it achieves strong results with only a fraction of the data and memory that current LLMs require, which could benefit enterprise applications where data is limited and computational resources are constrained.

Traditional LLMs often depend on chain-of-thought (CoT) prompting, which breaks problems into text-based steps, essentially making the model "think out loud." The researchers at Sapient Intelligence argue that this approach has significant limitations: it relies heavily on human-defined intermediate steps, and a single misstep in the generated chain can derail the answer.

The researchers propose that a more effective approach is needed to reduce data requirements and enhance reasoning. They explored "latent reasoning," in which the model works with an internal, abstract representation of the problem instead of generating explicit language tokens. This aligns more closely with human cognition, allowing a deeper, more coherent chain of reasoning without constant translation into language. Achieving this in AI poses challenges, however: stacking additional layers in deep networks can run into the "vanishing gradient" problem, while standard recurrent architectures tend to converge prematurely on a solution.
To overcome these issues, the Sapient team took cues from neuroscience, designing HRM around two interconnected recurrent modules: a high-level (H) module for abstract planning and a low-level (L) module for detailed computations. This structure enables what the team calls "hierarchical convergence." The L-module works on part of the problem, iterating until it reaches a stable solution. The H-module then assesses that result, adjusts the overall strategy, and hands the L-module a refined sub-task. This design prevents the L-module from getting stuck in a single fixed point and allows a long sequence of reasoning steps without lengthy CoT prompts.

In testing, HRM performed strongly on benchmarks that demand extensive search, such as the Abstraction and Reasoning Corpus (ARC-AGI) and difficult Sudoku puzzles. While leading CoT models scored zero on some of these tasks, HRM achieved near-perfect results after training on only 1,000 examples, and it outperformed leading CoT-based models on abstract reasoning tests.

According to Guan Wang, Founder and CEO of Sapient Intelligence, the real-world applications of HRM extend beyond puzzle-solving. For complex decision-making in areas such as robotics, embodied AI, and scientific exploration, he argues HRM is a superior alternative to LLMs, with the potential for up to a 100x increase in task completion speed. That acceleration translates into significant cost savings, letting businesses tackle specialized problems with limited data and resources.

Looking ahead, Sapient Intelligence is evolving HRM into a more general-purpose reasoning module. Early work shows promise in fields such as healthcare, climate forecasting, and robotics, with plans to incorporate self-correcting capabilities.
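The hierarchical-convergence loop described above can be sketched in a few lines. The following is a toy illustration only, not Sapient's implementation: the module dimensions, random weights, update rules (`l_step`, `h_step`), and iteration counts are all assumptions chosen to show the nested-loop structure, in which the L-module iterates toward a fixed point before the H-module revises the plan.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and random weights -- illustrative only.
DIM = 8
W_l = rng.normal(scale=0.3, size=(DIM, DIM))  # low-level recurrent weights
W_h = rng.normal(scale=0.3, size=(DIM, DIM))  # high-level recurrent weights

def l_step(z_l, z_h, x):
    # Low-level update, conditioned on the high-level plan and the input.
    return np.tanh(W_l @ z_l + z_h + x)

def h_step(z_h, z_l):
    # High-level update that absorbs the L-module's converged state.
    return np.tanh(W_h @ z_h + z_l)

def hrm_forward(x, n_cycles=4, max_l_steps=50, tol=1e-5):
    z_h = np.zeros(DIM)
    z_l = np.zeros(DIM)
    for _ in range(n_cycles):
        # Inner loop: the L-module refines its state until it stabilizes
        # (or a step budget runs out).
        for _ in range(max_l_steps):
            z_next = l_step(z_l, z_h, x)
            converged = np.linalg.norm(z_next - z_l) < tol
            z_l = z_next
            if converged:
                break
        # Outer step: the H-module assesses the result and revises the plan,
        # changing the context for the next round of L-module refinement.
        z_h = h_step(z_h, z_l)
    return z_h

out = hrm_forward(rng.normal(size=DIM))
print(out.shape)
```

Because the H-module's state changes between cycles, each inner loop solves a slightly different sub-problem, which is how the scheme avoids the premature convergence that plagues a single flat recurrent network.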
The research suggests that the future of AI may not lie in larger models, but in creating smarter, more structured systems inspired by the complex reasoning abilities of the human brain.

Source: VentureBeat

Published on: Jul 26, 2025, 24:20
