Chain of Thought
A prompting technique that asks an LLM to reason step by step before giving a final answer, improving performance on complex reasoning tasks.
Chain of Thought (CoT) prompting, introduced by Google researchers in 2022, dramatically improves LLM performance on reasoning tasks by asking the model to show its work. Instead of jumping straight to the answer, the model generates intermediate steps.
Simply appending "Let's think step by step" to a prompt (zero-shot CoT) can significantly boost performance on math and logic tasks. Few-shot CoT goes further, including worked examples in the prompt to show the model what good reasoning looks like.
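The two variants above can be sketched as simple prompt-assembly functions. This is a minimal illustration of prompt construction only, with no model call; the example question and the worked example are invented for demonstration, not taken from any benchmark:

```python
# Sketch of zero-shot and few-shot Chain-of-Thought prompt construction.
# The questions and the worked example below are illustrative placeholders.

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append a reasoning trigger so the model
    produces intermediate steps before its final answer."""
    return f"Q: {question}\nA: Let's think step by step."

# One worked example demonstrating the desired reasoning format.
FEW_SHOT_EXAMPLE = (
    "Q: A farmer has 15 apples and gives away 6. How many remain?\n"
    "A: The farmer starts with 15 apples. Giving away 6 leaves "
    "15 - 6 = 9. The answer is 9.\n"
)

def few_shot_cot(question: str) -> str:
    """Few-shot CoT: prepend worked examples, then pose the new
    question so the model imitates the demonstrated reasoning."""
    return FEW_SHOT_EXAMPLE + f"\nQ: {question}\nA:"

if __name__ == "__main__":
    q = "If a train travels 60 miles in 1.5 hours, what is its speed?"
    print(zero_shot_cot(q))
    print()
    print(few_shot_cot(q))
```

In practice the returned string would be sent to an LLM as the user prompt; the only change from a direct-answer prompt is the added trigger phrase or worked examples.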
CoT is now a foundational prompting technique. Modern reasoning models like OpenAI's o1 take this further, generating long internal chains of thought before producing a response.