Support Vector Machines (SVM) MCQs
January 8, 2026 / November 18, 2024, by u930973931_answers

1. What is the main goal of the Support Vector Machine (SVM) algorithm?
(A) To maximize the margin between classes while minimizing classification errors
(B) To minimize the distance between the support vectors and the decision boundary
(C) To calculate the probability of each class
(D) To reduce the dimensionality of the feature space
Answer: (A). Explanation: The main goal of SVM is to find a decision boundary (hyperplane) that maximizes the margin between the two classes, leading to better generalization.

2. In SVM, what is a “support vector”?
(A) A data point that is located far from the decision boundary
(B) A data point used for cross-validation
(C) A data point that lies on or near the margin and affects the position of the decision boundary
(D) A data point that is misclassified by the algorithm
Answer: (C). Explanation: Support vectors are the critical points that influence the placement of the decision boundary in SVM.

3. Which of the following is true about the kernel trick in SVM?
(A) It is used to make the SVM algorithm more interpretable.
(B) It is only applicable for binary classification tasks.
(C) It reduces the computational cost of training an SVM.
(D) It maps the data into a higher-dimensional space to make it linearly separable.
Answer: (D). Explanation: The kernel trick transforms data into a higher-dimensional space where a linear hyperplane can separate the data, even if it is not linearly separable in the original space.

4. Which kernel is commonly used in SVM for non-linear classification problems?
(A) Linear kernel
(B) Radial basis function (RBF) kernel
(C) Sigmoid kernel
(D) Polynomial kernel
Answer: (B). Explanation: The RBF kernel is widely used for non-linear classification as it can handle data that is not linearly separable.

5.
What does the “margin” in SVM refer to?
(A) The difference between the maximum and minimum values of the features
(B) The distance between the decision boundary and the nearest data points of each class
(C) The gap between training and testing data
(D) The number of support vectors used by the algorithm
Answer: (B). Explanation: The margin is the distance between the hyperplane and the nearest points from either class; SVM aims to maximize this margin.

6. In the context of SVM, what does the parameter “C” control?
(A) The size of the margin between classes
(B) The complexity of the decision boundary
(C) The choice of kernel to use in the model
(D) The regularization of the SVM, controlling the trade-off between achieving a larger margin and allowing some misclassification
Answer: (D). Explanation: A larger C reduces misclassifications but narrows the margin; a smaller C allows more misclassifications but yields a wider margin.

7. Which of the following is a common disadvantage of using SVM?
(A) SVM is highly interpretable and easy to explain.
(B) SVM is not effective for high-dimensional data.
(C) SVM is sensitive to the choice of kernel.
(D) SVM is only suitable for binary classification problems.
Answer: (C). Explanation: The performance of SVM depends heavily on choosing the appropriate kernel and hyperparameters.

8. In SVM, what does the “slack variable” represent in the context of soft margin classification?
(A) The number of support vectors
(B) The penalty for misclassification
(C) The margin width
(D) The error tolerance for non-separable data
Answer: (D). Explanation: Slack variables allow some misclassification for non-separable data, balancing margin width against training errors.

9. Which of the following would be an ideal application for a Support Vector Machine?
Explanation: SVM works best with linearly separable data or data that can be transformed to be linearly separable.
(A) When you have a very large dataset with many features but few samples
(B) When the data is linearly separable and you want to classify it efficiently
(C) When the task is unsupervised learning
(D) When the data is highly unstructured, such as images or text data
Answer: (B).

10. What is the difference between “hard margin” and “soft margin” in SVM?
(A) Hard margin allows for misclassification, while soft margin does not.
(B) Hard margin does not allow any misclassification, while soft margin allows for some errors in the case of non-separable data.
(C) Hard margin is used only in binary classification, while soft margin can be used in multi-class classification.
(D) Hard margin requires a larger dataset compared to soft margin.
Answer: (B). Explanation: Hard margin SVM requires perfectly separable data, while soft margin SVM tolerates some errors on real-world datasets.
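The hard/soft distinction in Question 10 and the slack variables of Question 8 come together in the standard textbook soft-margin objective (a general formulation, not taken from the quiz):

```latex
\min_{w,\,b,\,\xi}\; \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_{i}
\quad \text{subject to} \quad
y_{i}\,(w^{\top}x_{i} + b) \ge 1 - \xi_{i}, \qquad \xi_{i} \ge 0 .
```

Forcing every slack variable to zero recovers the hard-margin problem, which is feasible only when the data are linearly separable; allowing a slack variable to be positive, at a cost of C per unit, is what gives the soft margin its error tolerance.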
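The kernel trick from Questions 3 and 4 can be seen in a few lines of code. This is a minimal sketch assuming scikit-learn and NumPy are installed; the XOR-style data and the gamma/C values are illustrative choices, not part of the quiz:

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style toy data: no straight line separates the two classes
# in the original 2-D space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 10, dtype=float)
y = np.array([0, 1, 1, 0] * 10)

linear_svm = SVC(kernel="linear", C=10.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)

# The RBF kernel implicitly maps the points into a space where a
# separating hyperplane exists; the linear kernel cannot do this.
print("linear training accuracy:", linear_svm.score(X, y))  # below 1.0
print("RBF training accuracy:", rbf_svm.score(X, y))        # 1.0
```

The linear model cannot reach perfect training accuracy because XOR is not linearly separable, while the RBF model fits the data exactly, which is precisely the point of the kernel trick.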
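The role of the C parameter from Question 6 can be sketched similarly (again assuming scikit-learn; the Gaussian-blob data and the two C values are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two mildly overlapping Gaussian blobs, one per class.
X = np.vstack([rng.randn(50, 2) + [2, 2], rng.randn(50, 2) - [2, 2]])
y = np.array([0] * 50 + [1] * 50)

# Small C: heavy regularization -> wide margin, many points fall on or
# inside the margin and become support vectors.
loose = SVC(kernel="linear", C=0.01).fit(X, y)
# Large C: misclassification is costly -> narrow margin, few support vectors.
strict = SVC(kernel="linear", C=100.0).fit(X, y)

print("support vectors with C=0.01:", loose.n_support_.sum())
print("support vectors with C=100 :", strict.n_support_.sum())
```

The small-C model ends up with far more support vectors than the large-C model, reflecting the wider margin that soft regularization buys at the cost of tolerating errors.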