Reasoning Model
ELI5 — The Vibe Check
An AI model that actually thinks before it speaks. Normal models just predict the next word, but reasoning models take extra time to work through problems step by step. They're slower but way smarter for hard tasks — like the difference between quick mental math and sitting down with a calculator.
Real Talk
Reasoning models are LLMs specifically trained or prompted to perform explicit multi-step reasoning before generating outputs. Examples include OpenAI's o1/o3 series and models using extended chain-of-thought. They trade latency for accuracy on complex tasks like math, coding, and planning.
When You'll Hear This
"Use a reasoning model for the algorithm problem — it needs to think." / "Reasoning models are overkill for simple text generation."
Related Terms
Chain of Thought (CoT)
Chain of Thought is when you tell the AI to 'show your work,' like a math teacher would.
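In practice, the simplest form of this is just appending a "think step by step" instruction to the prompt. Here's a minimal sketch — `with_cot` is an illustrative helper, not a real library function, and the actual model call is left out:

```python
# A minimal chain-of-thought sketch: the only change is appending a
# "show your work" instruction to the question before sending it to a model.

def with_cot(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction (illustrative)."""
    return f"{question}\n\nLet's think step by step, then give the final answer."

prompt = with_cot(
    "A bat and a ball cost $1.10 total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(prompt)
```

Dedicated reasoning models bake this behavior in during training, so you often don't need the explicit instruction at all.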
Inference
Inference is when the AI actually runs and generates output — as opposed to training, which is when it's learning.
LLM (Large Language Model)
An LLM is a humongous AI that read basically the entire internet and learned to predict what words come next, really really well.
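To make "predict what words come next" concrete, here's a toy version using simple word-pair counts. Real LLMs do this over tokens with billions of learned parameters, not frequency tables — this is only an illustration of the idea:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then pick the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" most often here)
```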
Token
In AI-land, a token is a chunk of text — roughly 3/4 of a word.
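That 3/4-of-a-word figure gives you a quick back-of-the-envelope estimator. A sketch, assuming the rough English-text heuristic above — real counts come from the model's own tokenizer (e.g. tiktoken for OpenAI models):

```python
# Rule of thumb from this entry: 1 token ≈ 3/4 of a word for English text.
# This is only an estimate; actual token counts depend on the tokenizer.

def estimate_tokens(text: str) -> int:
    """Estimate token count as word count × 4/3."""
    words = len(text.split())
    return round(words * 4 / 3)

print(estimate_tokens("Reasoning models think before they speak"))  # → 8
```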