
Instruction Tuning

A fine-tuning approach where a model is trained on many instruction-and-response examples to improve its ability to follow user requests.

Instruction tuning is the process of training a model on a large dataset of prompts and ideal responses so it becomes better at following human instructions. It is one of the key steps that turns a raw base model into a useful assistant.

Before instruction tuning, a model may be good at continuing text but poor at directly answering questions or following formatting requests. After instruction tuning, it becomes much more responsive to user intent.

What changes: the model shifts from "continue whatever text comes next" to "helpfully respond to what the user asked for."
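In practice, this shift is usually trained as supervised fine-tuning on formatted prompt/response pairs, with the loss masked over the prompt tokens so gradients come only from the response. A minimal sketch of building one such training example (the prompt template, whitespace "tokenizer", and IGNORE_INDEX value are illustrative assumptions, not any specific library's API):

```python
# Hypothetical instruction-tuning example builder. Real pipelines use a
# subword tokenizer and a training framework; this shows only the data shape.
PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n"
IGNORE_INDEX = -100  # conventional "no loss here" label value

def tokenize(text):
    # Stand-in tokenizer: one token per whitespace-separated word.
    return text.split()

def build_example(instruction, response):
    """Return (input_tokens, labels) with the loss masked over the prompt."""
    prompt_tokens = tokenize(PROMPT_TEMPLATE.format(instruction=instruction))
    response_tokens = tokenize(response)
    input_tokens = prompt_tokens + response_tokens
    # Labels mirror the inputs, but prompt positions are masked out:
    # the model is trained to produce the response, not to repeat the prompt.
    labels = [IGNORE_INDEX] * len(prompt_tokens) + response_tokens
    return input_tokens, labels

inputs, labels = build_example("Name a primary color.", "Red is a primary color.")
```

Because only the response positions carry a real label, the model's "continue whatever comes next" objective is repurposed into "answer what the user asked", which is the behavioral change described above.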

Why Instruction Tuning Matters

  • Better usability — models become easier to interact with directly
  • Stronger formatting control — more likely to follow requested structure
  • Foundation for assistants — often precedes alignment steps like RLHF
  • Broader generalization — training on diverse instructions helps the model follow task types it never saw during tuning

Instruction tuning is usually done after pretraining and before or alongside stronger alignment methods such as RLHF. It is a major milestone in the development of chat-style AI systems.
