
In-Context Learning

The ability of a model to learn patterns from instructions and examples provided inside the current prompt without updating its weights.

In-context learning is the phenomenon where a model adapts to a task based only on the text in the current prompt. The model does not update its internal parameters; it simply uses the examples and instructions in context to infer what to do.

This is one of the most surprising properties of modern LLMs. A single model can translate, summarize, classify, extract data, and write code, all because it can interpret the task from context rather than needing a separate trained model for each job.

Important distinction: in-context learning changes behavior temporarily, lasting only as long as the examples remain in the prompt, while fine-tuning changes behavior persistently by updating the model's weights.
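The idea can be made concrete with a few-shot prompt. The sketch below assembles labeled examples and a new input into a single prompt string; the task, data, and labels are illustrative, and no real model call is made, since any API or model name here would be an assumption.

```python
# Minimal sketch of few-shot in-context learning: the prompt alone
# carries the task; the model's weights are never updated.

def build_few_shot_prompt(examples, query):
    """Assemble labeled examples and a new input into one prompt.

    The model is expected to infer the input -> label pattern
    purely from the examples provided in context.
    """
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with an unanswered slot for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "Positive"),
    ("It broke after a week.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)
```

In a real application this string would be sent to an LLM, which would continue the pattern by emitting a label for the final review.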

How Teams Use In-Context Learning

  • Rapid prototyping — test workflows without retraining
  • Few-shot examples — teach the desired pattern inside the prompt
  • Task switching — reuse one model across many functions
  • Light customization — adapt outputs with instructions and context

In-context learning is a big reason general-purpose LLMs are so flexible. It shifts much of the work from model training to prompt design and application architecture.
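The task-switching pattern above can be sketched the same way: one model endpoint serves many functions, selected entirely by the instructions placed in context. The tasks and wording below are hypothetical placeholders, and the comment marks where a real model call would go.

```python
# Sketch of task switching via in-context instructions: the same
# (hypothetical) model handles every task; only the prompt changes.

TASK_INSTRUCTIONS = {
    "translate": "Translate the user's text into French.",
    "summarize": "Summarize the user's text in one sentence.",
    "extract": "List any dates mentioned in the user's text.",
}

def make_prompt(task, user_text):
    # In a real application, the returned string would be sent to
    # a single shared model; no per-task training is involved.
    instruction = TASK_INSTRUCTIONS[task]
    return f"{instruction}\n\nText: {user_text}"

for task in TASK_INSTRUCTIONS:
    print(make_prompt(task, "The launch is planned for 3 May."))
```

Because switching tasks is just a string change, most of the engineering effort moves into prompt design and the surrounding application code, as the paragraph above notes.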
