Machine Learning Tutorial Part 12: Model Evaluation Metrics
📏 Why Evaluate a Model?
Evaluating a machine learning model ensures it not only performs well on training data but also generalizes to unseen data. Let’s explore the key metrics used for classification tasks.
✅ Accuracy
Measures how often the model makes correct predictions:
from sklearn.metrics import accuracy_score
# y_true are the ground-truth labels, y_pred the model's predictions
accuracy = accuracy_score(y_true, y_pred)
print("Accuracy:", accuracy)
Limitation: Not reliable on imbalanced datasets, where a model can score high accuracy simply by always predicting the majority class.
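To see why, here is a minimal sketch (the labels are made up for illustration): on a dataset with 9 negatives and 1 positive, a "model" that always predicts the majority class still reaches 90% accuracy while catching zero positives.

```python
from sklearn.metrics import accuracy_score, recall_score

# Illustrative imbalanced labels: 9 negatives, 1 positive
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
# A degenerate "model" that always predicts the majority class
y_pred = [0] * 10

print("Accuracy:", accuracy_score(y_true, y_pred))  # 0.9 — looks great
print("Recall:  ", recall_score(y_true, y_pred))    # 0.0 — finds no positives
```

High accuracy here is an artifact of the class imbalance, which is exactly why precision and recall (below) matter.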
🎯 Precision & Recall
Precision: Of all predicted positives, the fraction that are truly positive.
Recall: Of all actual positives, the fraction that were correctly identified.
from sklearn.metrics import precision_score, recall_score
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print("Precision:", precision)
print("Recall:", recall)
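As a sanity check, precision and recall can also be computed by hand from the TP/FP/FN counts and compared against sklearn's values (the labels below are illustrative):

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0]  # illustrative ground truth
y_pred = [1, 1, 0, 1, 0, 0]  # illustrative predictions

# Count true positives, false positives, and false negatives
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

print(tp / (tp + fp), precision_score(y_true, y_pred))  # both 2/3
print(tp / (tp + fn), recall_score(y_true, y_pred))     # both 2/3
```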
📊 F1 Score
Harmonic mean of precision and recall. It’s useful when you need a balance between both.
from sklearn.metrics import f1_score
f1 = f1_score(y_true, y_pred)
print("F1 Score:", f1)
🔁 Confusion Matrix
A table that visualizes model performance by counting true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN).
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
📌 Summary
- Accuracy: Good for balanced classes.
- Precision: Important when false positives are costly.
- Recall: Important when missing positives is costly.
- F1 Score: Best when you want a balance between precision and recall.
- Confusion Matrix: Visual breakdown of prediction results.
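For a one-shot overview, scikit-learn's classification_report prints precision, recall, and F1 for every class at once (the labels below are illustrative):

```python
from sklearn.metrics import classification_report

y_true = [1, 1, 1, 0, 0, 0]  # illustrative ground truth
y_pred = [1, 1, 0, 1, 0, 0]  # illustrative predictions

# One call reports per-class precision, recall, F1, and support
print(classification_report(y_true, y_pred))
```

This is often the quickest way to compare all the metrics covered above side by side.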