Natural Language Processing MCQs
December 22, 2025 / August 10, 2024, by u930973931_answers

1. What does NLP stand for?
(A) Natural Language Processing
(B) Neural Language Processing
(C) Numerical Language Processing
(D) Nonlinear Language Processing

2. Which of the following is a common task in NLP?
(A) Clustering
(B) Image classification
(C) Time series forecasting
(D) Sentiment analysis

3. What is “tokenization” in NLP?
(A) The process of breaking down text into individual words or phrases
(B) The process of encoding text into numerical format
(C) The process of translating text into another language
(D) The process of summarizing text

4. Which technique is used for converting words into numerical vectors in NLP?
(A) One-hot encoding
(B) K-means clustering
(C) Principal Component Analysis (PCA)
(D) Convolutional Neural Networks (CNNs)

5. What is the purpose of “stemming” in NLP?
(A) To translate text into different languages
(B) To categorize text into predefined topics
(C) To reduce words to their base or root form
(D) To correct grammatical errors

6. Which of the following is an example of a “stop word”?
(A) “model”
(B) “machine”
(C) “learning”
(D) “the”

7. What is “Named Entity Recognition” (NER) used for in NLP?
(A) To identify and classify entities such as people, organizations, and locations in text
(B) To generate new text based on the given input
(C) To translate text into another language
(D) To summarize large documents

8. Which model is commonly used for generating word embeddings?
(A) K-means clustering
(B) Word2Vec
(C) Support Vector Machines (SVM)
(D) Random Forest

9. What does the term “n-gram” refer to in NLP?
(A) A method for translating languages
(B) A type of neural network layer
(C) A text summarization technique
(D) A contiguous sequence of n items from a given text

10. Which technique is used to improve text classification accuracy by combining multiple models?
(A) Hyperparameter tuning
(B) Data augmentation
(C) Dimensionality reduction
(D) Ensemble learning

11. What is “part-of-speech tagging” (POS tagging) in NLP?
(A) Summarizing text into shorter sentences
(B) Translating text into different languages
(C) Assigning parts of speech (e.g., nouns, verbs) to each word in a text
(D) Detecting the sentiment of a text

12. Which algorithm is often used for text classification tasks in NLP?
(A) Decision Trees
(B) Naive Bayes
(C) K-means clustering
(D) Principal Component Analysis (PCA)

13. What is “word sense disambiguation” in NLP?
(A) The process of identifying named entities in text
(B) The process of determining the correct meaning of a word based on context
(C) The process of tokenizing text into individual words
(D) The process of translating text into another language

14. Which of the following is a common evaluation metric for text classification models?
(A) Mean Squared Error (MSE)
(B) F1-score
(C) Accuracy
(D) Root Mean Squared Error (RMSE)

15. What does “LSTM” stand for in the context of NLP?
(A) Long Short-Term Memory
(B) Linear Short-Term Memory
(C) Logistic Sequential Temporal Model
(D) Layered Statistical Text Model

16. Which technique is commonly used for text summarization?
(A) Extractive summarization
(B) Generative adversarial networks (GANs)
(C) K-means clustering
(D) Principal Component Analysis (PCA)

17. What is “TF-IDF” in NLP?
(A) Term Frequency-Inverse Document Frequency, a statistical measure used to evaluate the importance of a word in a document
(B) Temporal Frequency-Inverse Document Factor, a method for text classification
(C) Textual Frequency-Inverse Data Feature, used for summarization
(D) Term Factor-Inverse Document Frequency, a type of embedding

18. What does “semantic similarity” refer to in NLP?
(A) The degree to which two pieces of text have similar meanings
(B) The grammatical structure of sentences
(C) The number of unique words in a text
(D) The syntactic structure of sentences

19.
Which of the following models is used for machine translation tasks?
(A) Random Forest
(B) K-means clustering
(C) Decision Trees
(D) Transformer

20. What is the “attention mechanism” in NLP models?
(A) A technique that allows the model to focus on different parts of the input text when making predictions
(B) A type of data preprocessing
(C) A method for feature extraction
(D) A type of regularization
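To make the concepts behind Q3 (tokenization) and Q9 (n-grams) concrete, here is a minimal sketch in plain Python. The regex-based tokenizer is a deliberate simplification for illustration, not how production NLP libraries tokenize:

```python
import re

def tokenize(text):
    # Naive word tokenizer: lowercase, then pull out runs of letters,
    # digits, and apostrophes (Q3: breaking text into individual words).
    return re.findall(r"[a-z0-9']+", text.lower())

def ngrams(tokens, n):
    # Q9: an n-gram is a contiguous sequence of n items from the text.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = tokenize("The cat sat on the mat.")
# tokens -> ['the', 'cat', 'sat', 'on', 'the', 'mat']
bigrams = ngrams(tokens, 2)
# bigrams -> [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]
```

Note that even this toy example surfaces the quiz's stop-word idea (Q6): "the" appears twice and carries little meaning on its own.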
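Q17's TF-IDF can likewise be sketched in a few lines. This uses the textbook formula tf × log(N/df); real implementations (e.g. scikit-learn's TfidfVectorizer) typically add smoothing and normalization on top:

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of tokenized documents (lists of words).
    # Q17: a word's weight combines how frequent it is within one
    # document (term frequency) with how rare it is across all
    # documents (inverse document frequency).
    n = len(docs)
    df = Counter()                 # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return scores

scores = tf_idf([["the", "cat", "sat"], ["the", "dog"]])
# "the" occurs in every document, so log(N/df) = log(1) = 0 and its
# score is 0; "cat" is unique to the first document and scores > 0.
```

This is also why stop words (Q6) tend to vanish under TF-IDF weighting without being removed explicitly.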
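Finally, the attention mechanism from Q20 can be illustrated with a single-query, single-head sketch of scaled dot-product attention; real Transformer layers (Q19's answer) batch this over matrices and add learned projections:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Q20: the query is scored against every key; softmax turns the
    # scores into weights that say how much to "focus" on each input
    # position, and the output is the weighted average of the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return output, weights

out, weights = attention([1.0, 0.0],                   # query
                         [[1.0, 0.0], [0.0, 1.0]],     # keys
                         [[10.0, 0.0], [0.0, 10.0]])   # values
# The query matches the first key best, so weights[0] > weights[1]
# and the output leans toward the first value vector.
```

The weights always sum to 1, so attention is an input-dependent weighted average rather than a fixed feature extractor, which is what distinguishes answer (A) from the distractors.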