Scientists have just developed a new AI model inspired by the human brain, and it's already making waves. Early tests show that this breakthrough AI is outperforming even advanced LLMs like ChatGPT when it comes to complex reasoning tasks.
The Hierarchical Reasoning Model (HRM) is designed to mimic how the human brain processes complex information, and it has managed to outperform some of the leading LLMs on a benchmark that's known for being extremely tough to beat.
Scientists have created a new type of AI model that approaches reasoning in a completely different way compared to most large language models (LLMs) like ChatGPT. Thanks to this new approach, the model delivers significantly better performance on several key benchmarks.
The new reasoning AI, known as the Hierarchical Reasoning Model (HRM; https://pmc.ncbi.nlm.nih.gov/articles/PMC11665873/), is inspired by how the human brain processes information, integrating data across different time scales, from milliseconds to minutes.
According to scientists at Sapient, an AI company based in Singapore, this model not only delivers better performance but also works more efficiently. That's because it needs fewer parameters and less training data compared to traditional models.
According to the study, uploaded on June 26 to the arXiv preprint database (https://arxiv.org/abs/2506.21734) and still awaiting peer review, the HRM model uses just 27 million parameters and was trained on only 1,000 samples. In contrast, most advanced LLMs rely on billions, even trillions, of parameters. For comparison, while the exact number isn't public, estimates suggest that the newly released GPT-5 could have anywhere between 3 trillion and 5 trillion parameters.
A new way of thinking for AI
Scientists have developed a revolutionary AI model designed to think more like the human brain, and it's already beating some of the world's most advanced language models, including ChatGPT, in complex reasoning tests.
The new system, called the Hierarchical Reasoning Model (HRM), is inspired by how the brain processes and integrates information across different time scales, from milliseconds to minutes. Unlike traditional large language models (LLMs) that depend on brute-force computation, HRM focuses on smarter, structured reasoning.
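As a rough illustration of what processing on two time scales can look like, the toy Python sketch below (hypothetical names and sizes, not Sapient's code) couples a fast low-level recurrent state, updated on every inner tick, with a slow high-level state that updates only once per outer cycle, after the fast state has settled.

```python
import numpy as np

# Toy two-timescale recurrence (illustrative only; not the HRM implementation).
rng = np.random.default_rng(0)
D = 16                                     # hidden size for both modules

W_fast = rng.normal(0, 0.1, (D, 2 * D))    # fast module reads [fast, slow] state
W_slow = rng.normal(0, 0.1, (D, 2 * D))    # slow module reads [slow, fast] state

def step(state, weights, context):
    """One recurrent update: tanh of a linear map over the concatenated states."""
    return np.tanh(weights @ np.concatenate([state, context]))

x_fast = np.zeros(D)                       # "milliseconds" state, updated every tick
x_slow = np.zeros(D)                       # "minutes" state, updated once per cycle

N_SLOW, N_FAST = 4, 25
for _ in range(N_SLOW):
    for _ in range(N_FAST):
        # Low-level module iterates rapidly while the high-level state stays fixed.
        x_fast = step(x_fast, W_fast, x_slow)
    # High-level module updates once, conditioned on the settled low-level state.
    x_slow = step(x_slow, W_slow, x_fast)

print("fast-state norm:", np.linalg.norm(x_fast))
print("slow-state norm:", np.linalg.norm(x_slow))
```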
Researchers at Sapient, an AI company based in Singapore, say HRM not only performs better but also works more efficiently. Unlike models with massive architectures, HRM uses just 27 million parameters and was trained on only 1,000 samples, a fraction of what modern LLMs need. For comparison, today's cutting-edge models, like GPT-5, are estimated to have between 3 trillion and 5 trillion parameters.
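To put those scale figures in perspective, here is a quick back-of-envelope comparison using only the numbers quoted above (the GPT-5 figure is itself an unconfirmed estimate):

```python
# Scale comparison based solely on the figures quoted in this article.
hrm_params = 27e6                    # 27 million parameters (reported for HRM)
gpt5_low, gpt5_high = 3e12, 5e12     # 3-5 trillion parameters (estimate, unconfirmed)

print(f"GPT-5 (low estimate)  ~{gpt5_low / hrm_params:,.0f}x more parameters than HRM")
print(f"GPT-5 (high estimate) ~{gpt5_high / hrm_params:,.0f}x more parameters than HRM")
# Roughly 111,000x to 185,000x the parameter count of HRM.
```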
When tested on the ARC-AGI benchmark, an extremely challenging test designed to measure how close AI is to achieving artificial general intelligence (AGI), HRM delivered impressive results:
ARC-AGI-1: HRM scored 40.3%
(vs. OpenAI's o3-mini-high at 34.5%, Claude 3.7 at 21.2%, and DeepSeek R1 at 15.8%)
ARC-AGI-2: HRM achieved 5%
(compared to 3% for o3-mini-high, 1.3% for DeepSeek R1, and just 0.9% for Claude 3.7)
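For a quick sense of the margins, the short snippet below recomputes the gaps directly from the percentages reported above; it adds nothing beyond the article's own numbers.

```python
# ARC-AGI scores as reported above (percent correct).
arc_agi_1 = {"HRM": 40.3, "o3-mini-high": 34.5, "Claude 3.7": 21.2, "DeepSeek R1": 15.8}
arc_agi_2 = {"HRM": 5.0, "o3-mini-high": 3.0, "DeepSeek R1": 1.3, "Claude 3.7": 0.9}

for name, scores in [("ARC-AGI-1", arc_agi_1), ("ARC-AGI-2", arc_agi_2)]:
    best_llm = max(v for k, v in scores.items() if k != "HRM")
    print(f"{name}: HRM leads the best listed LLM by {scores['HRM'] - best_llm:.1f} points")
# ARC-AGI-1: +5.8 points over o3-mini-high; ARC-AGI-2: +2.0 points.
```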
Most LLMs, including ChatGPT, rely on chain-of-thought (CoT) reasoning, which breaks complex problems into smaller, natural-language steps. While this works well, HRM takes a different, brain-inspired approach, allowing it to process information hierarchically and solve difficult reasoning tasks with fewer resources.
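For readers unfamiliar with the term, the hand-written example below shows the kind of step-by-step, natural-language decomposition that chain-of-thought prompting elicits; it is purely illustrative and not the output of any particular model.

```python
# A hand-written chain-of-thought style trace: the problem is broken into
# small natural-language steps before the final answer is stated.
question = "A train leaves at 2:15 pm and the trip takes 1 h 50 min. When does it arrive?"

chain_of_thought = [
    "Step 1: 2:15 pm plus 1 hour is 3:15 pm.",
    "Step 2: 3:15 pm plus 50 minutes is 4:05 pm.",
    "Answer: the train arrives at 4:05 pm.",
]

print(question)
print("\n".join(chain_of_thought))
```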
Experts believe this breakthrough could mark a significant leap toward human-like AI reasoning, and possibly bring us closer to AGI than ever before.