Models & Architecture

Model Weights

The learned parameter values in a neural network that determine how input signals are transformed into outputs.

Model weights are the learned coefficients inside a neural network that control how strongly one neuron influences another. During training, these weights are adjusted repeatedly, typically by gradient descent, so the model's predictions improve.

If parameters are the model's learned memory, weights are the largest and most important part of that memory. They encode the relationships the model has discovered in data, from grammatical structure to visual patterns to coding conventions.

In simple terms: weights tell the model what to pay attention to and how much each input should influence the output.
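This weighted-influence idea can be sketched with a single artificial neuron. The weight and input values below are illustrative only, not taken from any real model:

```python
# A single neuron: each weight scales how much one input influences the output.
# Values here are made up for illustration.

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus a bias term."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

inputs = [0.5, -1.0, 2.0]
weights = [0.8, 0.1, 0.4]   # learned during training
bias = 0.2

result = neuron_output(inputs, weights, bias)
print(round(result, 3))  # 0.5*0.8 + (-1.0)*0.1 + 2.0*0.4 + 0.2 = 1.3
```

A large input paired with a near-zero weight contributes almost nothing, while a small input with a large weight can dominate: that is what "how much each input should influence the output" means concretely.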

How Weights Are Used

  • During training — updated to reduce error on examples
  • During inference — kept fixed while the model generates outputs
  • During fine-tuning — adjusted further for a specific domain or task
  • During model sharing — often distributed as downloadable checkpoint files
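The "updated to reduce error" step can be sketched with one weight and gradient descent. This is a minimal toy, assuming a linear model `y = w * x` with squared error; the learning rate and data are illustrative:

```python
# Toy training loop: repeatedly adjust a single weight to reduce squared error.
# Model: y = w * x. All numbers here are illustrative.

def train_step(w, x, target, lr=0.1):
    pred = w * x
    grad = 2 * (pred - target) * x   # derivative of (w*x - target)^2 w.r.t. w
    return w - lr * grad             # move the weight downhill on the error

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, target=6.0)

print(round(w, 3))  # converges toward 3.0, since 3.0 * 2.0 matches the target
```

During inference, the loop simply stops: `w` is frozen and only the forward computation (`w * x`) runs. Fine-tuning resumes the loop from an already-trained `w` on new data.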

When developers talk about "loading model weights," they usually mean loading a trained checkpoint into memory so the model can run. That checkpoint is the practical artifact of everything learned during training.
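A checkpoint can be pictured as the trained parameter values serialized to disk. The sketch below uses Python's `pickle` as a simplified stand-in for real framework formats (for example, PyTorch's `torch.save` and `load_state_dict` serve the same role); the file name and weight values are illustrative:

```python
# Simplified view of "saving" and "loading model weights":
# a checkpoint is just the trained parameters written to a file.
# Keys and values below are illustrative, not from a real model.
import os
import pickle
import tempfile

trained_weights = {
    "layer1.weight": [[0.8, 0.1], [0.4, -0.3]],
    "layer1.bias": [0.2, -0.1],
}

path = os.path.join(tempfile.gettempdir(), "checkpoint.pkl")  # hypothetical path

with open(path, "wb") as f:
    pickle.dump(trained_weights, f)   # saving a checkpoint after training

with open(path, "rb") as f:
    restored = pickle.load(f)         # "loading model weights" before inference

assert restored == trained_weights    # the model can now run with these values
```

Real checkpoint files add metadata, tensor layouts, and safety considerations, but the core idea is the same: the file holds the learned values, and loading it puts those values back into the model's layers.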
