
Model Card

A standardized document that describes a model's purpose, capabilities, limitations, training data, and intended use.

Model cards, proposed by Google researchers in the 2019 paper "Model Cards for Model Reporting" (Mitchell et al.), are structured documentation for AI models. They describe what a model can and cannot do, how it was trained, what data it used, how it was evaluated, and what risks it carries.

The goal is transparency — giving users, researchers, and regulators the information they need to understand and responsibly deploy a model.

Typical sections: intended use, training data, evaluation metrics, ethical considerations, known limitations, and license terms.
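As a concrete sketch, the typical sections above could be captured in a small data structure. The field names and the `to_markdown` renderer here are illustrative assumptions for this glossary entry, not any lab's or library's standard schema.

```python
from dataclasses import dataclass, field

# Illustrative only: fields mirror the "typical sections" listed above,
# not any specific model card standard.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)
    license: str = "unspecified"

    def to_markdown(self) -> str:
        """Render the card as a simple Markdown document."""
        lines = [
            f"# Model Card: {self.name}",
            f"## Intended Use\n{self.intended_use}",
            f"## Training Data\n{self.training_data}",
            "## Evaluation Metrics",
        ]
        lines += [f"- {k}: {v}" for k, v in self.evaluation_metrics.items()]
        lines.append("## Known Limitations")
        lines += [f"- {item}" for item in self.known_limitations]
        lines.append(f"## License\n{self.license}")
        return "\n\n".join(lines)

card = ModelCard(
    name="example-classifier",
    intended_use="Sentiment analysis of English product reviews.",
    training_data="Public review corpora collected before 2023.",
    evaluation_metrics={"accuracy": 0.91, "f1": 0.89},
    known_limitations=["Not validated on non-English text."],
)
print(card.to_markdown())
```

In practice, cards are usually plain documents (often Markdown with metadata) rather than code, but a structured representation like this makes the expected sections explicit and machine-checkable.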

Model cards have become standard practice for responsible AI. Major labs like Anthropic, OpenAI, and Meta release model cards alongside their foundation models, though the level of detail varies widely.
