Models & Architecture

Parameters

The learned numerical values inside a neural network that store what the model has learned from training data.

Parameters are the internal numbers a model learns during training. In neural networks, these are mostly weights and biases: weights scale the signals passing between neurons, and biases shift each neuron's output before activation.
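To make this concrete, here is a minimal sketch of a single neuron in plain Python. The function name and values are illustrative, not from any particular framework; the point is simply that the weights and bias are the neuron's parameters.

```python
# A single neuron: its parameters are a weight vector and one bias.
# Illustrative sketch only; real networks also apply an activation function.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias (pre-activation)."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Two learned weights and one learned bias: three parameters in total.
output = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
print(output)  # 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1
```

Training adjusts `weights` and `bias` so that outputs match the data; the function's structure stays fixed.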

When people say a model has 7 billion or 70 billion parameters, they are referring to the total count of these learned values. More parameters generally mean more capacity to represent complex patterns, though architecture and training quality matter just as much.
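Where do these counts come from? For fully connected layers, the arithmetic is simple: each layer contributes one weight per input-output pair plus one bias per output. A short sketch, using illustrative layer sizes rather than any specific model:

```python
# Counting parameters of a small fully connected network, layer by layer.
# Layer sizes are illustrative (a classic MNIST-style 784 -> 256 -> 10 MLP).

def dense_layer_params(n_in, n_out):
    """A dense layer has n_in * n_out weights plus n_out biases."""
    return n_in * n_out + n_out

layers = [(784, 256), (256, 10)]
total = sum(dense_layer_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 784*256 + 256 + 256*10 + 10 = 203530
```

Billion-parameter models are the same idea scaled up: many layers, each with far larger weight matrices.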

Think of parameters as memory in numeric form: they store the patterns, relationships, and behaviors the model absorbed during training.

Why Parameter Count Matters

  • Model capacity — larger models can represent more complex functions
  • Hardware needs — more parameters require more memory and compute
  • Serving cost — large models are more expensive to run
  • Fine-tuning choices — parameter-efficient methods update only a small fraction of the parameters
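The hardware and serving-cost points above can be roughed out with a back-of-the-envelope calculation: the memory needed just to hold the weights is the parameter count times the bytes per parameter at a given numeric precision. This sketch ignores real-world overhead such as activations and KV caches:

```python
# Rough memory footprint of model weights: parameters x bytes per parameter.
# Illustrative estimate only; actual serving needs additional memory.

def weight_memory_gb(num_params, bytes_per_param):
    """Gigabytes needed to store the weights alone."""
    return num_params * bytes_per_param / 1e9

params_7b = 7_000_000_000
print(weight_memory_gb(params_7b, 4))  # 32-bit floats: 28.0 GB
print(weight_memory_gb(params_7b, 2))  # 16-bit floats: 14.0 GB
print(weight_memory_gb(params_7b, 1))  # 8-bit quantized: 7.0 GB
```

This is why quantization (storing each parameter in fewer bits) is a common lever for fitting large models onto smaller hardware.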

Parameter count is useful, but not sufficient on its own. Training data quality, architecture, and alignment methods like RLHF often matter as much as raw size when judging real-world model performance.
