Naive Bayes MCQs

1. Which of the following is a key assumption made by the Naive Bayes classifier?
(A) Features are independent of each other.
(B) Features are dependent on each other.
(C) The target variable is normally distributed.
(D) The features are linearly related to the target variable.
Answer: (A). Naive Bayes assumes that features are conditionally independent given the class label, which simplifies computation and makes the algorithm efficient.

2. In Naive Bayes, which distribution is commonly assumed for continuous data?
(A) Poisson distribution
(B) Normal distribution
(C) Exponential distribution
(D) Binomial distribution
Answer: (B). For continuous data, Naive Bayes typically assumes that each feature follows a normal (Gaussian) distribution within each class.

3. What is the main advantage of the Naive Bayes classifier?
(A) It can only be used for binary classification tasks.
(B) It performs well even with a small amount of training data.
(C) It is sensitive to irrelevant features.
(D) It requires a lot of computational resources.
Answer: (B). Naive Bayes is a probabilistic classifier that performs well with relatively small datasets, especially when the features are close to conditionally independent.

4. In Naive Bayes, what is the role of Bayes’ theorem?
(A) It is used to optimize the hyperparameters of the model.
(B) It helps in calculating the probability of a feature belonging to a particular class.
(C) It is used for feature selection.
(D) It calculates the loss function for model training.
Answer: (B). Bayes’ theorem computes the posterior probability of a class given the features, which is the central calculation in the Naive Bayes algorithm.

5. Which type of data can Naive Bayes be applied to?
(A) Only numerical data
(B) Only categorical data
(C) Only data with binary features
(D) Both categorical and numerical data
Answer: (D). Naive Bayes handles both categorical data (e.g., with a multinomial likelihood) and continuous data (typically with a Gaussian likelihood).

6. What is the primary disadvantage of the Naive Bayes classifier?
(A) It requires a large amount of training data.
(B) It performs poorly with large datasets.
(C) It is computationally expensive.
(D) It assumes that features are independent, which is often unrealistic.
Answer: (D). The conditional-independence assumption rarely holds exactly in real-world data, which can reduce performance.

7. In the context of Naive Bayes, what does the term “likelihood” refer to?
(A) The probability of the features given the class
(B) The probability of a class given the features
(C) The prior probability of a class
(D) The probability of the features occurring
Answer: (A). The likelihood is the probability of observing the given features under each class.

8. What is the purpose of Laplace smoothing in Naive Bayes?
(A) To ensure that no probability is zero when a feature value does not appear in the training data.
(B) To reduce the computational complexity of the model.
(C) To optimize the prior probabilities.
(D) To handle missing data.
Answer: (A). Laplace smoothing adds a small constant to the counts so that feature–class combinations unseen in training never receive a probability of zero.

9. Naive Bayes is particularly suited for which of the following tasks?
(A) Regression with large datasets
(B) Multiclass classification problems
(C) Feature engineering for deep learning models
(D) Clustering similar data points
Answer: (B). Naive Bayes is effective for multiclass classification, since it computes class probabilities for any number of classes efficiently.
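To make the Gaussian assumption (question 2), the categorical/continuous split (question 5), and Laplace smoothing (question 8) concrete, here is a minimal sketch. It assumes scikit-learn and NumPy are available; the quiz itself names no library, and the toy data is purely illustrative.

```python
# Minimal sketch, assuming scikit-learn and NumPy are installed.
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB

y = np.array([0, 0, 1, 1])

# Continuous features: GaussianNB fits a normal distribution per
# feature and class (question 2).
X_cont = np.array([[1.0, 2.1], [0.9, 1.8], [3.2, 4.0], [3.0, 4.2]])
gnb = GaussianNB().fit(X_cont, y)
# Posterior P(class | features), computed via Bayes' theorem (question 4).
print(gnb.predict_proba([[1.1, 2.0]]))

# Count features (e.g., word counts): MultinomialNB uses a multinomial
# likelihood (question 5); alpha=1.0 is Laplace smoothing, so feature
# values unseen in training never get zero probability (question 8).
X_counts = np.array([[2, 0, 1], [1, 1, 0], [0, 3, 2], [0, 2, 3]])
mnb = MultinomialNB(alpha=1.0).fit(X_counts, y)
print(mnb.predict([[0, 2, 2]]))
```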
10. Which of the following would likely reduce the performance of a Naive Bayes classifier?
(A) Using a smaller training dataset
(B) Using features that are highly correlated
(C) Using a very large number of features
(D) Using Laplace smoothing
Answer: (B). Naive Bayes assumes conditional independence of features, so highly correlated features violate this assumption and can degrade performance.
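Since a library call hides the arithmetic, the sketch below spells out the quantities the quiz keeps referring to: the prior P(c), the per-feature likelihoods P(x_i | c) (question 7), their product under the independence assumption (questions 1 and 6), Laplace smoothing (question 8), and the posterior P(c | x) (question 4). The weather data and all names are illustrative, not taken from the quiz.

```python
# Hand-rolled categorical Naive Bayes sketch; illustrative only.
from collections import Counter, defaultdict

def train(samples, labels):
    priors = Counter(labels)              # class counts -> prior P(c)
    counts = defaultdict(Counter)         # counts[(feature_idx, class)][value]
    vocab = defaultdict(set)              # distinct values per feature index
    for x, c in zip(samples, labels):
        for i, v in enumerate(x):
            counts[(i, c)][v] += 1
            vocab[i].add(v)
    return priors, counts, vocab

def posterior(x, priors, counts, vocab, alpha=1.0):
    total = sum(priors.values())
    scores = {}
    for c in priors:
        p = priors[c] / total             # prior P(c)
        for i, v in enumerate(x):
            ctr = counts[(i, c)]
            # Laplace-smoothed likelihood P(x_i | c): add alpha to the
            # count so an unseen value never zeroes the whole product.
            p *= (ctr[v] + alpha) / (sum(ctr.values()) + alpha * len(vocab[i]))
        scores[c] = p                     # prior * product of likelihoods
    norm = sum(scores.values())
    return {c: s / norm for c, s in scores.items()}  # posterior P(c | x)

samples = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
model = train(samples, labels)
# "cool" never co-occurs with class "no" in training; smoothing
# still yields a nonzero posterior for both classes.
print(posterior(("sunny", "cool"), *model))
```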