Evaluation & Metrics

F1 Score

A classification metric that combines precision and recall into a single score, balancing false positives and false negatives.

The F1 score is the harmonic mean of precision (what fraction of positive predictions are correct) and recall (what fraction of actual positives the model caught). It's especially useful when classes are imbalanced.

Formula: F1 = 2 × (Precision × Recall) / (Precision + Recall). The score ranges from 0 (worst) to 1 (perfect).
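The formula above can be sketched as a small helper function (a minimal illustration; the function name and the convention of returning 0 when both inputs are 0 are our choices, matching how libraries such as scikit-learn handle the degenerate case):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, in [0, 1]."""
    if precision + recall == 0:
        # Degenerate case: no positive predictions and no positives caught.
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8, 0.6), 3))  # 0.686
print(f1_score(1.0, 1.0))            # 1.0 (perfect)
```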

Why the harmonic mean? It punishes extreme values. A model with 100% precision but 10% recall gets a low F1 (about 0.18), whereas a simple arithmetic average would give a misleading 0.55.
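The precision/recall values here are illustrative, but the arithmetic shows the effect directly:

```python
precision, recall = 1.0, 0.10  # 100% precision, 10% recall

arithmetic_mean = (precision + recall) / 2
harmonic_mean = 2 * precision * recall / (precision + recall)

print(arithmetic_mean)          # 0.55 -- looks deceptively decent
print(round(harmonic_mean, 3))  # 0.182 -- F1 exposes the weak recall
```

Because the harmonic mean is dominated by the smaller of the two values, a model cannot score well on F1 by excelling at only one of precision or recall.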

F1 is the go-to metric for classification tasks where both false positives and false negatives matter, like spam detection, medical diagnosis, and information retrieval.
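In practice, F1 is computed from confusion-matrix counts rather than from precision and recall directly. A small sketch with made-up counts for a hypothetical spam classifier (the numbers are illustrative, not from any real dataset):

```python
# Hypothetical confusion counts for a spam classifier:
tp = 40  # spam correctly flagged
fp = 10  # legitimate mail wrongly flagged
fn = 20  # spam that slipped through

precision = tp / (tp + fp)  # 0.8
recall = tp / (tp + fn)     # ~0.667
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 3))  # 0.727
```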
