1. What does the ROC Curve (Receiver Operating Characteristic Curve) visualize?
A. The relationship between Precision and Recall
B. The trade-off between True Positive Rate (TPR) and False Positive Rate (FPR)
C. The correlation between actual values and predicted values
D. The distribution of the predicted probabilities
Answer: B
(ROC Curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR))
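The plotting described above can be sketched in plain Python: sweep a decision threshold over the predicted scores and record a (FPR, TPR) point at each step. The labels and scores below are made-up toy values; in practice a library routine such as `sklearn.metrics.roc_curve` does this.

```python
# Sketch: trace ROC points by sweeping a threshold over predicted scores.
def roc_points(y_true, scores):
    """Return (fpr, tpr) pairs, one per distinct score threshold."""
    pos = sum(y_true)             # number of actual positives
    neg = len(y_true) - pos       # number of actual negatives
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= t)
        points.append((fp / neg, tp / pos))   # (FPR, TPR)
    return points

y_true = [0, 0, 1, 1]             # toy labels
scores = [0.1, 0.4, 0.35, 0.8]    # toy predicted probabilities
print(roc_points(y_true, scores))  # [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

Lowering the threshold moves the curve from (0, 0) toward (1, 1), which is why each point trades a higher TPR against a higher FPR.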
2. What is the True Positive Rate (TPR) also known as?
A. Specificity
B. Recall or Sensitivity
C. Precision
D. False Negative Rate (FNR)
Answer: B
(TPR = Recall = TP / (TP + FN))
3. What does the False Positive Rate (FPR) represent in a confusion matrix?
A. The proportion of positive instances that are correctly classified as positive
B. The proportion of negative instances that are incorrectly classified as positive
C. The proportion of negative instances correctly classified as negative
D. The proportion of false positives relative to the total number of actual positives
Answer: B
(FPR = FP / (FP + TN))
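Both rate definitions (TPR from question 2, FPR from question 3) reduce to one-liners over confusion-matrix counts. The counts below are hypothetical, chosen only to make the arithmetic easy to check.

```python
# Sketch: TPR and FPR from hypothetical confusion-matrix counts.
tp, fn = 40, 10   # positives: 40 caught, 10 missed (made-up numbers)
fp, tn = 5, 95    # negatives: 5 false alarms, 95 correct rejections

tpr = tp / (tp + fn)  # recall / sensitivity
fpr = fp / (fp + tn)  # false-alarm rate
print(tpr, fpr)  # 0.8 0.05
```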
4. Which of the following describes a perfect classifier in the context of an ROC curve?
A. A classifier with an ROC curve that passes through the origin (0,0)
B. A classifier with a curve that hugs the top-left corner, indicating 100% sensitivity and 0% false positive rate
C. A classifier with a straight diagonal line, indicating random guessing
D. A classifier with equal True Positive and False Positive Rates
Answer: B
(A perfect classifier would achieve 100% sensitivity and 0% false positive rate, with the ROC curve hugging the top-left corner)
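This can be made concrete: when the scores separate the classes perfectly, some threshold yields TPR = 1.0 and FPR = 0.0, so the curve passes through the top-left corner (0, 1). The scores below are illustrative.

```python
# Sketch: perfectly separated scores give a threshold with TPR=1, FPR=0.
y      = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
t = 0.5  # any threshold between the two score groups works

tpr = sum(1 for yi, s in zip(y, scores) if yi == 1 and s >= t) / 3
fpr = sum(1 for yi, s in zip(y, scores) if yi == 0 and s >= t) / 3
print(tpr, fpr)  # 1.0 0.0  -> the top-left corner of the ROC plot
```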
5. What is the range of the Area Under the ROC Curve (AUC) value?
A. [0, 1]
B. [0, ∞)
C. (-∞, ∞)
D. [0, 0.5]
Answer: A
(AUC ranges from 0 to 1, with 1 being a perfect classifier and 0.5 indicating a random classifier)
6. What does an AUC value of 0.5 imply about a model?
A. The model is performing well and effectively distinguishing between classes
B. The model is randomly guessing
C. The model is perfectly classifying all instances
D. The model is unable to identify positive instances at all
Answer: B
(An AUC of 0.5 indicates no discrimination ability, which is equivalent to random guessing)
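One way to see this is through the pairwise-ranking definition of AUC (the probability that a random positive outranks a random negative, with ties counted as 0.5): a classifier that assigns every instance the same score cannot rank at all, and lands at exactly 0.5. A minimal sketch with toy labels:

```python
# Sketch: AUC as the fraction of positive/negative pairs ranked
# correctly (ties count 0.5). A constant scorer gives exactly 0.5.
def auc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 1, 0, 1, 0, 1]
print(auc(y, [0.5] * 6))  # 0.5 -- no discrimination, same as guessing
```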
7. Which of the following is True about an ROC curve with a steep initial rise?
A. The model has a low True Positive Rate and high False Positive Rate
B. The model is poor at distinguishing between positive and negative classes
C. The model is making predictions with high accuracy early on
D. The model is showing a good balance between precision and recall
Answer: C
(A steep initial rise means the model reaches a high True Positive Rate while the False Positive Rate is still low, so its most confident predictions are largely correct)
8. When comparing two models using ROC curves, which model is considered better?
A. The model with the curve closest to the bottom-right corner
B. The model with the highest AUC value
C. The model with the curve closest to the bottom-left corner
D. The model with the curve that has the least area
Answer: B
(The model with the highest AUC value is considered better, as it demonstrates higher discriminatory power)
9. What does an ROC curve that bows below the diagonal (i.e., toward the bottom-right corner) indicate?
A. The model is overfitting
B. The model has poor performance and is predicting incorrectly most of the time
C. The model is performing excellently
D. The model is underfitting
Answer: B
(A curve below the diagonal means the model ranks negatives above positives more often than not, i.e., it misclassifies more often than random guessing would)
10. How is AUC interpreted when comparing different classification models?
A. Higher AUC values indicate better model performance, with 1.0 being perfect and 0.5 indicating random guessing
B. Higher AUC values indicate worse performance, with 1.0 being the worst and 0.5 being optimal
C. AUC is not a reliable metric for model performance
D. AUC values above 0.7 indicate a model is overfitting
Answer: A
(Higher AUC values indicate better model performance, with 1.0 being perfect and 0.5 being equivalent to random guessing)
11. Which of the following statements about the ROC curve is true?
A. The ROC curve can only be used for binary classification tasks
B. The ROC curve shows the relationship between precision and recall
C. The ROC curve plots False Negative Rate (FNR) against True Positive Rate (TPR)
D. The ROC curve plots True Positive Rate (TPR) against False Positive Rate (FPR)
Answer: D
(The ROC curve plots TPR vs. FPR)
12. What is the main purpose of the ROC curve in model evaluation?
A. To evaluate how well the model fits the data
B. To determine the optimal threshold for classification
C. To estimate the computational efficiency of the model
D. To visualize how well the model distinguishes between positive and negative classes
Answer: D
(The ROC curve is used to visualize the performance of a model in distinguishing between the positive and negative classes)
13. In the context of ROC analysis, what does Sensitivity represent?
A. The ability of the model to correctly identify negative instances
B. The proportion of true positives out of all actual positives
C. The proportion of false positives out of all actual negatives
D. The proportion of correctly classified instances in the test set
Answer: B
(Sensitivity, or TPR, represents the proportion of true positives out of all actual positives)
14. Which of the following is False regarding AUC?
A. AUC is a measure of how well the model distinguishes between classes
B. AUC can be used with both binary and multi-class classification problems
C. AUC values below 0.5 indicate that the model is performing better than random guessing
D. AUC is independent of the threshold used for classification
Answer: C
(AUC values below 0.5 indicate that the model is worse than random guessing)
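A worse-than-random AUC has a curious property worth noting: inverting the model's scores flips the AUC to 1 − AUC, so a consistently wrong ranker is actually informative. A sketch with made-up scores, using the pairwise-ranking definition of AUC:

```python
# Sketch: AUC < 0.5 is worse than random; inverting the scores
# flips the AUC to 1 - AUC. Toy labels and scores for illustration.
def auc(y_true, scores):
    """Fraction of positive/negative pairs ranked correctly (ties = 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 1]
scores = [0.1, 0.2, 0.8, 0.9]           # every positive outranks every negative
print(auc(y, scores))                    # 1.0
print(auc(y, [1 - s for s in scores]))   # 0.0 -- fully inverted ranking
```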
15. If a model has an AUC of 0.85, what does this mean?
A. The model can correctly classify 85% of positive and negative instances
B. The model has a 15% chance of ranking a random positive instance below a random negative instance
C. The model has a 15% error rate
D. The model is better than random guessing, with 85% of its predictions being correct
Answer: B
(AUC of 0.85 means that the model has an 85% chance of correctly ranking a random positive instance higher than a random negative instance)
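The ranking interpretation can be verified directly by counting pairs. The scores below are hypothetical, chosen so that exactly 17 of the 20 positive/negative pairs are ranked correctly, giving AUC = 17/20 = 0.85.

```python
# Sketch: AUC = P(score of random positive > score of random negative).
pos = [0.9, 0.8, 0.7, 0.35]       # scores of 4 positive instances (toy)
neg = [0.1, 0.3, 0.4, 0.5, 0.6]   # scores of 5 negative instances (toy)

correct = sum(1 for p in pos for n in neg if p > n)  # correctly ranked pairs
auc = correct / (len(pos) * len(neg))
print(correct, auc)  # 17 0.85
```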