🎯 Core Goals
- Learn why “thinking step by step” makes AI smarter.
- Understand the difference between prompting for reasoning and native “thinking” models.
For complex tasks, don’t ask for the answer immediately. Ask the AI to “think step by step.” This forces it to lay out its reasoning before committing to a final conclusion.
👁️ Visuals & Interactives
Show Your Work
See how thinking step-by-step prevents logic errors.
Direct Answer
Question: "If I have 3 apples, eat 1, and buy 5 more—how many?"
AI Output: "You have 8 apples." (Wrong! It missed the subtraction step.)
Chain of Thought
✨ Much More Accurate
Question: "If I have 3 apples, eat 1, and buy 5 more—how many? Let's think step-by-step."
Thinking Process
1. Start with 3 apples.
2. Eat 1 apple: 3 - 1 = 2 apples remaining.
3. Buy 5 more: 2 + 5 = 7 apples.
4. Final count: 7.
AI Output: "You have 7 apples." (Correct!)
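The two prompts above differ only in a trailing trigger phrase. A minimal sketch of wrapping a question either way (the helper names and exact trigger wording are illustrative, not a fixed API):

```python
def build_direct_prompt(question: str) -> str:
    """Ask for the answer with no reasoning instruction."""
    return question

def build_cot_prompt(question: str) -> str:
    """Append the classic chain-of-thought trigger so the model reasons first."""
    return f"{question} Let's think step-by-step."

question = "If I have 3 apples, eat 1, and buy 5 more—how many?"
print(build_direct_prompt(question))
print(build_cot_prompt(question))
```

Either string would then be sent to the model; only the second reliably elicits the step-by-step thinking process shown above.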
📝 Key Concepts
- Chain-of-Thought (CoT): Prompting for step-by-step reasoning.
- Thinking Tokens: Modern models (connected to what we learned about model training) are trained to do this reasoning internally before they begin emitting their final answer.
- The “Scratchpad” Effect: Generating intermediate steps provides additional context for the final answer, making it more reliable.
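To illustrate the "thinking tokens" idea: many reasoning models delimit their internal scratchpad before the final answer. The `<think>…</think>` tag used below is one common convention; the exact delimiter varies by model, so treat this as a sketch:

```python
import re

def strip_thinking(response: str) -> str:
    """Remove a <think>...</think> scratchpad block, keeping only the final answer.

    The tag name is one convention among several; adjust per model.
    """
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

raw = "<think>Start with 3. Eat 1: 3 - 1 = 2. Buy 5: 2 + 5 = 7.</think>You have 7 apples."
print(strip_thinking(raw))  # → You have 7 apples.
```

The scratchpad tokens still shaped the answer even though the user never sees them, which is exactly the "Scratchpad" Effect above.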
Even when the reasoning chain contains errors, the simple act of spending more tokens on reasoning improves final accuracy. It’s like how scribbling on scratch paper helps you solve a math problem, even if your scribbles aren’t perfectly organized. In this sense, reasoning is really an “effort” dial: if you want a smarter model, turn the reasoning effort up.
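As a sketch of that effort dial: some provider APIs expose a reasoning-effort setting on the request. The field name and values below follow one common convention and will differ across providers and SDKs:

```python
# Hypothetical request payload for a reasoning model.
# "reasoning_effort" and its low/medium/high values follow one provider's
# convention; check your provider's API reference for the actual field.
request = {
    "model": "some-reasoning-model",
    "reasoning_effort": "high",  # spend more thinking tokens before answering
    "messages": [
        {"role": "user", "content": "If I have 3 apples, eat 1, and buy 5 more—how many?"},
    ],
}
```

Higher effort typically means more thinking tokens (and more latency and cost) before the final answer appears.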
Is reasoning the same thing as thinking?