Evaluation & Metrics

Cross-Validation

A technique for evaluating model performance by splitting data into multiple folds and testing on each fold in turn.

Cross-validation provides a more reliable estimate of model performance than a single train/test split. In k-fold cross-validation, the data is divided into k equal parts (folds); the model is trained on k-1 folds and tested on the held-out fold, rotating until each fold has served as the test set once.

The final performance is averaged across all folds, giving a more stable estimate that uses every data point for both training and testing.
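The rotation described above can be sketched in plain Python. This is a minimal illustration, not a production implementation; the function names (`k_fold_indices`, `cross_validate`) and the `train_and_score` callback are hypothetical, chosen here for clarity.

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    return folds

def cross_validate(n_samples, k, train_and_score):
    """Average a user-supplied scoring callback over k train/test rotations.

    train_and_score(train_idx, test_idx) is expected to fit a model on the
    training indices and return its score on the test indices.
    """
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i in range(k):
        test_idx = set(folds[i])
        train_idx = [j for j in range(n_samples) if j not in test_idx]
        scores.append(train_and_score(train_idx, sorted(test_idx)))
    return sum(scores) / k
```

Because every sample appears in exactly one test fold, the averaged score reflects a prediction for each data point exactly once.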

In practice, 5-fold or 10-fold cross-validation is standard. Stratified variants preserve the class distribution within each fold, which is important for imbalanced datasets.
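To illustrate the stratified idea, one simple scheme is to group indices by class and deal each class's samples round-robin across the folds. This is a sketch of one possible approach; the name `stratified_folds` is hypothetical.

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign sample indices to k folds while preserving class proportions.

    Samples of each class are dealt round-robin across the folds, so every
    fold receives (approximately) the same fraction of each class.
    """
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds
```

For example, with 10 samples of class 0 and 5 of class 1 split into 5 folds, each fold receives two class-0 samples and one class-1 sample, matching the overall 2:1 ratio.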

Cross-validation is the gold standard for model comparison in traditional ML. It's less common for deep learning due to computational cost but remains essential in medical and scientific applications where data is scarce.
