Neural Networks MCQs
Published November 18, 2024 · Updated January 8, 2026 · by u930973931_answers

1. What is the primary function of an activation function in a neural network?
(A) To calculate the loss of the model
(B) To prevent overfitting
(C) To introduce non-linearity into the model
(D) To optimize the weights during training

2. Which of the following is a common activation function used in deep neural networks?
(A) ReLU activation
(B) Sigmoid activation
(C) Tanh activation
(D) Linear activation

3. What is the purpose of backpropagation in neural networks?
(A) To compute the output of the network
(B) To optimize the activation function
(C) To update the weights of the network based on the error
(D) To select the training dataset

4. Which of the following optimization algorithms is commonly used in training neural networks?
(A) Decision Tree
(B) K-means Clustering
(C) Gradient Descent
(D) Naive Bayes

5. What does the term “epoch” refer to in the context of neural network training?
(A) A single iteration of the forward pass
(B) A single pass through the entire training dataset
(C) A measure of how many layers the neural network has
(D) The number of units in the output layer

6. In a neural network, what is the purpose of the output layer?
(A) To produce a set of features for the next layer
(B) To calculate the loss function
(C) To extract the weights from the network
(D) To provide the final prediction or classification of the model

7. What is the vanishing gradient problem in neural networks?
(A) The gradients of the activation functions become too large and destabilize the network.
(B) The weights of the network become too large.
(C) The network learns too quickly and overfits the data.
(D) The gradients become too small, leading to minimal weight updates and slow or no learning.

8. Which of the following is an example of a loss function commonly used in classification problems?
(A) Mean Squared Error (MSE)
(B) Both C and D
(C) Hinge Loss
(D) Cross-Entropy Loss

9. What is the role of hidden layers in a neural network?
(A) They directly produce the final output of the model.
(B) They transform the input data into a format that can be processed by the output layer.
(C) They store the weights of the network.
(D) They calculate the loss function.

10. Which of the following techniques is commonly used to prevent overfitting in neural networks?
(A) All of the following
(B) Data augmentation
(C) Early stopping
(D) Regularization (L1 or L2)

11. What is the purpose of the dropout technique in neural networks?
(A) To speed up training by skipping certain neurons
(B) To reduce overfitting by randomly “dropping out” some neurons during training
(C) To increase the complexity of the model by adding more neurons
(D) To calculate the loss of the model

12. Which of the following is a common type of neural network used for image recognition tasks?
(A) Convolutional Neural Networks (CNN)
(B) Recurrent Neural Networks (RNN)
(C) Long Short-Term Memory (LSTM)
(D) Multilayer Perceptron (MLP)

13. What does the term “backpropagation” refer to in neural networks?
(A) The process of feeding inputs through the network
(B) The evaluation of the network’s performance
(C) The initialization of weights before training
(D) The process of updating weights in the network by calculating gradients

14. In neural networks, what is the difference between a deep network and a shallow network?
(A) A deep network has more layers than a shallow network.
(B) A shallow network has more layers than a deep network.
(C) Deep networks do not require backpropagation.
(D) Shallow networks are only used for regression tasks.

15. Which of the following neural network architectures is specifically designed to handle sequential data?
(A) Recurrent Neural Networks (RNN)
(B) Convolutional Neural Networks (CNN)
(C) Autoencoders
(D) Generative Adversarial Networks (GAN)
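A minimal sketch of the idea behind the activation-function questions: without a non-linear activation such as ReLU or sigmoid, stacked layers collapse into a single linear map, so the non-linearity is what lets depth add expressive power. This is a plain-Python illustration, not tied to any particular framework:

```python
import math

def relu(x):
    # ReLU: zero for negative inputs, identity for positive inputs
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid squashes any real input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# A purely linear "activation" f(x) = x would make two stacked linear
# layers equivalent to one; relu/sigmoid break that equivalence.
print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(sigmoid(0.0))            # 0.5
```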
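The gradient-descent, backpropagation, and epoch questions can be tied together with a toy example: repeatedly nudging a single weight against the gradient of a loss. The quadratic loss and learning rate here are illustrative choices, not anything from the quiz itself:

```python
# Minimize loss(w) = (w - 3)^2 with plain gradient descent.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient: d/dw (w - 3)^2 = 2(w - 3); in a real network,
    # backpropagation computes this quantity for every weight.
    return 2.0 * (w - 3.0)

w = 0.0    # initial weight
lr = 0.1   # learning rate
for epoch in range(100):   # one "epoch" = one full pass over the (tiny) data
    w -= lr * grad(w)      # the weight update backpropagation feeds

print(round(w, 4))  # converges toward 3.0, the minimizer of the loss
```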
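For the dropout question, a sketch of the common "inverted dropout" variant: during training each unit is zeroed with probability p and survivors are rescaled, so no change is needed at evaluation time. The function name and the choice of inverted dropout are assumptions for illustration:

```python
import random

def dropout(values, p=0.5, training=True, seed=None):
    # Inverted dropout: zero each unit with probability p during training
    # and scale survivors by 1/(1-p), so the expected activation matches
    # what the layer produces at evaluation time.
    if not training:
        return list(values)  # dropout is disabled outside training
    rng = random.Random(seed)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

At evaluation time (`training=False`) the activations pass through unchanged; during training roughly half the units (for p=0.5) are zeroed on each forward pass, which is what discourages co-adaptation and reduces overfitting.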