Machine Learning Tutorial Part 9: Model Evaluation & Metrics

🎯 Why Model Evaluation Matters

Model evaluation tells you how well your machine learning model performs on unseen data. It lets you compare candidate models, detect overfitting, and fine-tune performance.

📊 Common Classification Metrics

  • Accuracy: the share of all predictions that are correct.
  • Precision: the share of predicted positives that are actually positive.
  • Recall (Sensitivity): the share of actual positives that the model correctly identifies.
  • F1-Score: the harmonic mean of precision and recall.
  • Confusion Matrix: a table of TP, TN, FP, and FN counts, from which all of the above derive (see the sketch after this list).
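
To make the definitions concrete, here is a minimal sketch that computes each metric directly from the four counts. The counts are illustrative; they happen to match the Python example in the next section:

# Illustrative confusion-matrix counts (not from a real model)
tp, tn, fp, fn = 3, 2, 1, 1

accuracy  = (tp + tn) / (tp + tn + fp + fn)                 # correct / all
precision = tp / (tp + fp)                                  # predicted positives that are right
recall    = tp / (tp + fn)                                  # actual positives that are found
f1        = 2 * precision * recall / (precision + recall)   # harmonic mean

print(accuracy, precision, recall, f1)  # 0.714..., 0.75, 0.75, 0.75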

🧪 Evaluation Example in Python

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix

# Toy ground-truth labels and model predictions
y_true = [1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1]

print("Accuracy:", accuracy_score(y_true, y_pred))    # (TP + TN) / total
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:", recall_score(y_true, y_pred))        # TP / (TP + FN)
print("F1 Score:", f1_score(y_true, y_pred))          # harmonic mean of the two
print("Confusion Matrix:\n", confusion_matrix(y_true, y_pred))  # rows: true, cols: predicted

📈 ROC Curve & AUC

The ROC curve plots the true positive rate against the false positive rate as the classification threshold varies. AUC (Area Under the Curve) summarizes overall performance in one number: the closer to 1, the better, while 0.5 is no better than random guessing. Because the curve is traced across thresholds, roc_curve needs continuous scores (such as predicted probabilities) rather than hard class labels.

from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

# roc_curve needs continuous scores (e.g. from model.predict_proba),
# not hard 0/1 labels; these probabilities are illustrative
y_scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6]

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label=f"AUC = {roc_auc:.2f}")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve")
plt.legend()
plt.show()
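
If you only need the number and not the plot, roc_auc_score computes the same area in a single call; a minimal sketch reusing the labels and the illustrative scores from above:

from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 1, 0]
y_scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6]  # illustrative probabilities

# Same value as auc(fpr, tpr) from the plot above
print("AUC:", roc_auc_score(y_true, y_scores))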

📌 Summary

  • Use accuracy_score for balanced datasets.
  • Use precision, recall, and f1_score for imbalanced classes.
  • Use a confusion_matrix to visualize prediction errors (a plotting sketch follows this list).
  • Use ROC-AUC for binary classification evaluation.
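
For the confusion-matrix bullet above, a quick way to get an actual visualization is ConfusionMatrixDisplay, assuming scikit-learn ≥ 1.0, which provides the from_predictions constructor:

from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt

y_true = [1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1]

# Renders the confusion matrix as an annotated heatmap
ConfusionMatrixDisplay.from_predictions(y_true, y_pred)
plt.show()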
