k-Nearest Neighbors (k-NN) MCQs
January 8, 2026 / November 18, 2024, by u930973931_answers

1. Which of the following is true about the k-Nearest Neighbors (k-NN) algorithm?
(A) It is a generative model.
(B) It requires a training phase.
(C) It only works with numerical data.
(D) It is a supervised learning algorithm.
Answer: (D). k-NN is a supervised learning algorithm used for both classification and regression tasks.

2. What does the "k" in k-Nearest Neighbors represent?
(A) The number of features in the dataset.
(B) The number of nearest neighbors to consider for classification or regression.
(C) The number of classes in the target variable.
(D) The number of dimensions in the feature space.
Answer: (B). "k" is the number of neighbors the algorithm consults when classifying or predicting a data point.

3. Which of the following distance metrics is most commonly used in k-NN?
(A) Euclidean distance
(B) Manhattan distance
(C) Hamming distance
(D) Cosine similarity
Answer: (A). Euclidean distance is the most commonly used metric for measuring the distance between points in the feature space in k-NN.

4. How does k-NN handle classification tasks?
(A) By fitting a decision boundary between classes
(B) By maximizing the likelihood of the features given the class
(C) By minimizing the loss function
(D) By assigning the most common class among the k-nearest neighbors
Answer: (D). In classification, k-NN assigns the class label based on the majority vote of the k nearest neighbors.

5. What happens when the value of "k" is too small in k-NN?
(A) The model becomes less sensitive to noise.
(B) The model may underfit the data.
(C) The model may overfit to the training data.
(D) The model performs better with new data.
Answer: (C). A small "k" makes the model sensitive to noise, which leads to overfitting and poor generalization.

6.
What happens when the value of "k" is too large in k-NN?
(A) The model may overfit to the training data.
(B) The model may underfit the data.
(C) The model becomes highly sensitive to outliers.
(D) The model works better with a smaller dataset.
Answer: (B). A large "k" averages over many neighbors, which smooths out the underlying patterns and can lead to underfitting.

7. Which of the following is a disadvantage of the k-NN algorithm?
(A) It performs poorly with high-dimensional data.
(B) It is computationally expensive during prediction.
(C) It is very sensitive to missing values in the dataset.
(D) It is not suitable for regression tasks.
Answer: (B). k-NN is a "lazy" learner with essentially no training phase, but it must store all training data and compute distances to every stored point for each new query, which is computationally intensive on large datasets.

8. Which technique can help improve the performance of k-NN on high-dimensional data?
(A) Using a smaller value for "k"
(B) Feature scaling (e.g., normalization or standardization)
(C) Using the Manhattan distance instead of Euclidean distance
(D) Increasing the number of neighbors
Answer: (B). Feature scaling ensures that all features contribute comparably to the distance calculation, which improves k-NN performance in high-dimensional spaces.

9. In k-NN, what does the "voting" process refer to in classification?
(A) Selecting the feature that best separates the classes
(B) Counting the number of times a particular class appears among the k nearest neighbors and assigning the class with the highest count
(C) Assigning the class with the maximum probability based on Bayes' theorem
(D) Assigning the class based on the average of the target labels of the nearest neighbors
Answer: (B). In classification, the class label is assigned by a majority vote among the k nearest neighbors.

10.
What is a common method to handle ties in k-NN classification (when multiple classes have the same number of neighbors)?
(A) Randomly assign a class label
(B) Choose the class based on the distance to the neighbors
(C) Assign the class with the largest probability
(D) Ignore the instance and make no prediction
Answer: (B). When a tie occurs, k-NN typically resolves it by choosing the class of the closest neighbor among the tied classes.
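As a concrete illustration of questions 2, 3, 4, and 9 above, here is a minimal k-NN classification sketch using Euclidean distance and majority voting. The helper names `euclidean` and `knn_predict` and the toy data are invented for this example, not part of the quiz:

```python
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line (Euclidean) distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(X_train, y_train, query, k=3):
    # Sort all training points by distance to the query point.
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda p: euclidean(p[0], query))
    # Majority vote among the labels of the k nearest neighbors.
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters.
X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
     (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
y = ["red", "red", "red", "blue", "blue", "blue"]

print(knn_predict(X, y, (1.1, 0.9), k=3))  # prints "red"
```

Note that there is no training step: the "model" is simply the stored dataset, and all work happens at prediction time (question 7).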
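The overfitting/underfitting trade-off from questions 5 and 6 can be demonstrated with the same kind of sketch: a toy dataset containing one mislabeled ("noisy") point shows k=1 chasing the noise while a larger k smooths over it. All data and helper names here are illustrative assumptions:

```python
import math
from collections import Counter

def euclidean(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(X_train, y_train, query, k):
    # Majority vote among the k nearest training points.
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda p: euclidean(p[0], query))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# A noisy "blue" point (index 3) sits inside the red cluster.
X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (1.05, 0.95),
     (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]
y = ["red", "red", "red", "blue", "blue", "blue", "blue"]

query = (1.04, 0.96)
print(knn_predict(X, y, query, k=1))  # prints "blue": overfits to the noisy point
print(knn_predict(X, y, query, k=5))  # prints "red":  larger k outvotes the noise
```

Pushing k even higher would eventually let the distant blue cluster dominate every prediction, which is the underfitting failure mode of question 6.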
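For question 8, here is a minimal sketch of min-max normalization, assuming two features on very different scales; the helper `minmax_scale` and the sample values are hypothetical:

```python
def minmax_scale(column):
    # Rescale a list of values linearly onto the range [0, 1].
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

# Heights in metres span ~0.3 units; weights in grams span ~1000 units,
# so raw Euclidean distances would be dominated by the weight feature.
heights = [1.5, 1.6, 1.7, 1.8]
weights = [500.0, 1500.0, 600.0, 1400.0]

scaled = list(zip(minmax_scale(heights), minmax_scale(weights)))
print(scaled)  # both features now span [0, 1] and contribute comparably
```

Standardization (subtracting the mean and dividing by the standard deviation) is a common alternative with the same goal: no single feature should dominate the distance computation just because of its units.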
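Question 10's distance-based tie-breaking can be sketched as follows: with an even k and a split vote, the class of the single closest neighbor among the tied classes wins. The function name `knn_predict_tiebreak` and the data are invented for this example:

```python
import math
from collections import Counter

def euclidean(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict_tiebreak(X_train, y_train, query, k=4):
    # k nearest neighbors, already ordered by distance to the query.
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda p: euclidean(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    ranked = votes.most_common()
    best_count = ranked[0][1]
    tied = {label for label, count in ranked if count == best_count}
    if len(tied) == 1:
        return ranked[0][0]  # clear majority, no tie to break
    # Tie: fall back to the class of the closest neighbor among the
    # tied classes (neighbors are already sorted by distance).
    for _, label in neighbors:
        if label in tied:
            return label

# Two points per class; with k=4 the vote is always 2-2,
# so the nearer class wins the tie-break.
X = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.0), (2.5, 0.0)]
y = ["A", "A", "B", "B"]

print(knn_predict_tiebreak(X, y, (0.2, 0.0), k=4))  # prints "A"
```

Other common conventions include random selection among the tied classes or simply choosing an odd k so that binary-classification ties cannot occur.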