
Chain of Thought

A prompting technique that asks an LLM to reason step by step before giving a final answer, improving complex reasoning.

Chain of Thought (CoT) prompting, introduced by Google researchers in 2022, dramatically improves LLM performance on reasoning tasks by asking the model to show its work. Instead of jumping straight to the answer, the model generates intermediate steps.

Simply appending "Let's think step by step" before the answer can significantly boost performance on math and logic problems (zero-shot CoT). Few-shot variants go further, providing worked examples in the prompt to show the model what good reasoning looks like.
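A minimal sketch of how these two prompt styles can be constructed; the sample question and worked example below are illustrative, not from the original paper:

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the trigger phrase before the answer."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot CoT: prepend worked (question, step-by-step solution)
    pairs so the model imitates the reasoning format."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

# Hypothetical worked example in the style of few-shot CoT prompts.
example = (
    "Roger has 5 tennis balls. He buys 2 cans of 3 balls. How many now?",
    "Roger starts with 5 balls. 2 cans of 3 is 6 more. 5 + 6 = 11. "
    "The answer is 11.",
)
prompt = few_shot_cot("A train travels 60 km in 1.5 hours. What is its speed?",
                      [example])
```

The resulting string is sent to the model as-is; the model then continues the pattern, emitting its own reasoning steps before the final answer.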

Why it works: generating intermediate steps gives the model more computation per answer, since each reasoning token is an additional forward pass, and the visible steps let it catch its own errors mid-reasoning.

CoT is now a foundational prompting technique. Modern reasoning models like o1 take this further, generating long internal chains of thought before producing a response.

