Evaluation and Validation MCQs

1. What is the purpose of cross-validation in machine learning?
(A) To reduce the size of the dataset
(B) To increase the number of training examples
(C) To improve the model’s accuracy by adding more features
(D) To evaluate the model’s performance and reduce overfitting

2. Which of the following is a common evaluation metric for classification models?
(A) Root Mean Squared Error (RMSE)
(B) Adjusted R-Squared
(C) F1 Score
(D) Mean Absolute Error (MAE)

3. In a confusion matrix, what does the term “True Positive” (TP) refer to?
(A) Instances incorrectly classified as negative
(B) Instances correctly classified as positive
(C) Instances incorrectly classified as positive
(D) Instances correctly classified as negative

4. Which of the following is used to assess the performance of a regression model?
(A) Accuracy
(B) Precision
(C) F1 Score
(D) Mean Squared Error (MSE)

5. Which technique is commonly used to evaluate performance on imbalanced datasets?
(A) Confusion Matrix
(B) ROC Curve and AUC
(C) Accuracy
(D) Root Mean Squared Error (RMSE)

6. What does the “AUC” (Area Under the Curve) represent in ROC analysis?
(A) Ability to distinguish between positive and negative classes
(B) Number of positive predictions
(C) Threshold value for decisions
(D) Amount of training data

7. What is a common issue with using accuracy on imbalanced datasets?
(A) It is unsuitable for multiclass problems
(B) It always overestimates performance
(C) It cannot handle missing values
(D) It can be misleading for minority class performance

8. In k-fold cross-validation, why is the dataset split into k subsets?
(A) To evaluate the model on all data portions and reduce overfitting
(B) To test on a smaller dataset
(C) To speed up training
(D) To balance the dataset

9. What does the term “overfitting” refer to?
(A) Model is too simple and underperforms on training data
(B) Performs well on both training and test data
(C) Performs well on training data but poorly on unseen data
(D) Performs well on test data but poorly on training data

10. What is the primary goal of model validation?
(A) To assess how well the model performs on unseen data
(B) To adjust model parameters
(C) To test model speed
(D) To reduce training data size
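
The questions above cover k-fold cross-validation, the confusion matrix, F1 score, ROC AUC, and mean squared error. The following is a minimal sketch of how these could be computed with scikit-learn; the synthetic datasets, the 90/10 class imbalance, the logistic and linear regression models, and the 5-fold split are illustrative assumptions, not part of the quiz.

import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import (confusion_matrix, f1_score, roc_auc_score,
                             mean_squared_error)

# --- Classification: confusion matrix, F1, ROC AUC (Q2, Q3, Q5, Q6, Q7) ---
# Imbalanced synthetic data (assumed 90% negative / 10% positive) to show why
# plain accuracy can be misleading for the minority class.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)[:, 1]

# ravel() returns counts in the order TN, FP, FN, TP for binary labels
tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print("TP:", tp, "FP:", fp, "FN:", fn, "TN:", tn)
print("F1 score:", f1_score(y_te, y_pred))        # balances precision and recall
print("ROC AUC:", roc_auc_score(y_te, y_prob))    # class-separation ability

# --- k-fold cross-validation: evaluate on every data portion (Q1, Q8) ---
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print("5-fold F1 scores:", np.round(scores, 3), "mean:", round(scores.mean(), 3))

# --- Regression: mean squared error (Q4) ---
Xr, yr = make_regression(n_samples=500, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("MSE:", mean_squared_error(yr_te, reg.predict(Xr_te)))

A large gap between the cross-validation scores and the training-set score on the same model is the practical symptom of overfitting asked about in question 9.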