Classification Evaluation Metrics

Accuracy: The proportion of correctly classified instances out of the total instances. It is a reliable summary only when the classes are roughly balanced; on an imbalanced dataset, a model can score high accuracy simply by always predicting the majority class.

Accuracy = (True Positives + True Negatives) / Total Instances
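
To make the formula concrete, here is a minimal sketch in Python using scikit-learn; the y_true and y_pred arrays are made-up example labels, not data from a real model.

```python
from sklearn.metrics import accuracy_score

# Hypothetical ground-truth labels and predictions (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

# Count correct predictions directly, mirroring the formula above
correct = sum(t == p for t, p in zip(y_true, y_pred))
print(correct / len(y_true))            # 0.625
print(accuracy_score(y_true, y_pred))   # 0.625, same result via scikit-learn
```

The same arrays are reused in the examples below so the metrics can be compared.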

Precision: The proportion of correctly predicted positive instances out of all predicted positives. It measures how trustworthy the model's positive predictions are.
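
Precision = True Positives / (True Positives + False Positives)

Continuing with the same made-up labels as above:

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

# The model predicts 5 positives, of which 3 are actually positive: 3 / 5
print(precision_score(y_true, y_pred))  # 0.6
```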

Recall (Sensitivity): The proportion of correctly predicted positive instances out of all actual positives. It measures how many of the actual positives the model captures.
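
Recall = True Positives / (True Positives + False Negatives)

Again with the same hypothetical labels:

```python
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

# There are 4 actual positives, of which the model finds 3: 3 / 4
print(recall_score(y_true, y_pred))  # 0.75
```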

F1-Score: The harmonic mean of precision and recall. It balances the trade-off between them and is especially useful for imbalanced datasets.
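
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

One last sketch with the same hypothetical labels; classification_report is a convenient way to print all of the metrics above at once:

```python
from sklearn.metrics import f1_score, classification_report

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

# Harmonic mean of precision (0.6) and recall (0.75): ~0.667
print(f1_score(y_true, y_pred))

# Per-class precision, recall, and F1 in one table
print(classification_report(y_true, y_pred))
```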