Definition: large reasoning model


Considered the next generation of AI, large reasoning models (LRMs) are said to "think" rather than only predict. Whether machines can truly think has been debated in the AI world for decades, and pundits contend that LRMs simply use more advanced pattern recognition algorithms than today's large language models (LLMs).

It is quite possible that hybrid architectures combining a reasoning model with an LLM will become widely used, in which case the reasoning component handles the deduction and draws the conclusions, while the LLM turns them into the human prose needed to explain the result to mere mortals. See large language model.
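
As a rough illustration of that division of labor, the Python sketch below wires a stand-in reasoning step to a stand-in prose step. The function names and return values are hypothetical and do not reflect any vendor's actual API.

# Hypothetical hybrid pipeline: a reasoning component derives the conclusion
# and its supporting steps, and an LLM-style component rewrites them as prose.
# All names and values here are illustrative, not a real product's interface.

def solve_with_reasoner(question: str) -> dict:
    # Stand-in for the reasoning model: returns a conclusion plus the steps behind it.
    return {
        "conclusion": "The train arrives at 3:40 PM.",
        "steps": ["150 miles / 60 mph = 2.5 hours", "1:10 PM + 2.5 hours = 3:40 PM"],
    }

def explain_with_llm(conclusion: str, steps: list) -> str:
    # Stand-in for the LLM: turns the structured reasoning into readable prose.
    return conclusion + " Here is why: " + "; then ".join(steps) + "."

def hybrid_answer(question: str) -> str:
    result = solve_with_reasoner(question)                           # deduction and conclusions
    return explain_with_llm(result["conclusion"], result["steps"])   # human-readable prose

print(hybrid_answer("A train leaves at 1:10 PM going 60 mph for 150 miles. When does it arrive?"))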

The Reasoning Revolution
In 2025, MIT Technology Review claimed that reasoning models are qualitatively different from LLMs. It said that "LRMs can explore different hypotheses, assess if answers are consistently correct, and adjust their approach accordingly."
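
The toy Python loop below illustrates that generate-and-check idea in the simplest possible terms; propose_hypotheses() and check() are placeholder functions invented for this example, not how any actual LRM works internally.

# Toy generate-and-check loop: propose several candidate answers, test each,
# and try again if none holds up. Purely conceptual; real LRMs do this
# internally over learned representations, not with explicit Python functions.
import random

def propose_hypotheses(question: str, n: int = 3) -> list:
    # Placeholder for "explore different hypotheses": guess several candidates.
    return [random.randint(1, 20) for _ in range(n)]

def check(question: str, answer: int) -> bool:
    # Placeholder for "assess if answers are consistently correct";
    # the question is hard-coded as 7 + 6 for this example.
    return answer == 13

def reasoning_loop(question: str, max_rounds: int = 20):
    for _ in range(max_rounds):                         # adjust the approach each round
        for candidate in propose_hypotheses(question):  # explore hypotheses
            if check(question, candidate):              # assess correctness
                return candidate
    return None                                         # may fail within the budget

print(reasoning_loop("What is 7 + 6?"))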

LRM Examples
Major LRMs include Google's AlphaGeometry and AlphaZero, China's DeepSeek R1, OpenAI's o3, and Anthropic's Claude 3.5 Sonnet and Opus.

Will We Have to Wait Longer?
LRMs generally take longer to generate answers than LLMs. With ChatGPT and other chatbots, people have become used to an immediate response after entering a prompt. If LRMs become a major part of AI, users may have to get used to waiting several seconds or even a couple of minutes for an answer. See AI glossary.