Pre-trained Model

A model that has already been trained on broad data and can then be adapted or used for downstream tasks.

A pre-trained model has already learned broad patterns from large-scale data before being applied to a product or specialized task. This initial training gives it a general base of knowledge and capability.

Instead of starting from scratch, teams can build on a pre-trained model and adapt it with prompting, fine-tuning, or retrieval systems. This saves enormous time, cost, and compute.

Modern AI development usually starts here: use a strong pre-trained foundation, then adapt it for your application.

Why Pre-trained Models Are Valuable

  • Lower cost — avoids full training from zero
  • Faster development — start from a capable baseline
  • Broad knowledge — captures language and pattern structure from massive data
  • Adaptability — works with prompts, RAG, or fine-tuning

Many popular LLMs and open-weight models are distributed as pre-trained checkpoints. Teams then layer on alignment, instruction-following, or domain-specific customization to make them production-ready.
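The adaptation pattern can be sketched in a toy example: a "pre-trained" component is kept frozen while a small task-specific head is trained from scratch on downstream data. This is a minimal sketch, not a real checkpoint; the frozen random weights below are a hypothetical stand-in for weights that would normally come from large-scale training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" layer: in practice these weights come from
# large-scale training; here fixed random weights serve as a placeholder.
W_pre = rng.normal(size=(4, 8))

def features(x):
    """Frozen pre-trained feature extractor; its weights are never updated."""
    return np.tanh(x @ W_pre)

# Downstream task data (synthetic, for illustration only).
X = rng.normal(size=(64, 4))
true_w = rng.normal(size=8)
y = features(X) @ true_w + 0.01 * rng.normal(size=64)

# Only this small head is trained from scratch on the downstream task.
head = np.zeros(8)
lr = 0.1
for _ in range(200):
    pred = features(X) @ head
    grad = features(X).T @ (pred - y) / len(y)  # gradient w.r.t. head only
    head -= lr * grad

mse = np.mean((features(X) @ head - y) ** 2)
print(f"downstream training MSE: {mse:.4f}")
```

The key point is that only the 8-parameter head is updated; the pre-trained layer supplies reusable features, which is why adaptation needs far less data and compute than training from zero. Fine-tuning real models follows the same idea at much larger scale, optionally unfreezing some pre-trained weights as well.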
