Hierarchical clustering MCQs

1. What is the primary objective of hierarchical clustering?

  • A) To partition data into a predefined number of clusters
  • B) To form a hierarchy of clusters from the dataset
  • C) To classify data into categories based on predefined labels
  • D) To minimize the number of clusters

Answer: B) To form a hierarchy of clusters from the dataset
Explanation: Hierarchical clustering creates a hierarchy of clusters that can be visualized as a tree structure, known as a dendrogram.
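
As a quick illustration, here is a minimal SciPy sketch (on made-up toy data) showing that the algorithm produces a full merge hierarchy, which can then be cut into any number of flat clusters after the fact:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (5, 2)),   # one toy blob
               rng.normal(3, 0.3, (5, 2))])  # another toy blob

# linkage() returns the full merge history (a hierarchy),
# not a single flat partition.
Z = linkage(X, method="average")

# The same hierarchy can be cut at different levels to obtain
# different numbers of flat clusters.
print(fcluster(Z, t=2, criterion="maxclust"))  # cut into 2 clusters
print(fcluster(Z, t=4, criterion="maxclust"))  # cut into 4 clusters
```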


2. Which of the following is true about agglomerative hierarchical clustering?

  • A) It starts with a single cluster containing all data points.
  • B) It starts with each data point in its own cluster.
  • C) It merges clusters until only one remains.
  • D) It does not require a distance metric.

Answer: B) It starts with each data point in its own cluster.
Explanation: Agglomerative hierarchical clustering is a bottom-up approach that starts with each point as its own cluster and successively merges the closest clusters.
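
A toy, pure-NumPy sketch of the bottom-up idea (single linkage chosen here purely for simplicity; the data is invented):

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
clusters = [[i] for i in range(len(X))]  # every point starts in its own cluster

def single_link(a, b):
    # minimum pairwise distance between two clusters
    return min(np.linalg.norm(X[i] - X[j]) for i in a for j in b)

while len(clusters) > 1:
    # find the closest pair of clusters...
    pairs = [(i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: single_link(clusters[p[0]], clusters[p[1]]))
    # ...and merge them, shrinking the cluster list by one
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]
    print(clusters)
```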


3. What is the main difference between agglomerative and divisive hierarchical clustering?

  • A) Agglomerative starts with one large cluster, while divisive starts with individual points.
  • B) Agglomerative is a bottom-up approach, while divisive is a top-down approach.
  • C) Divisive clustering uses a predefined number of clusters, while agglomerative does not.
  • D) Agglomerative is more computationally efficient than divisive.

Answer: B) Agglomerative is a bottom-up approach, while divisive is a top-down approach.
Explanation: Agglomerative clustering starts with individual points as separate clusters and merges them, while divisive clustering starts with one large cluster and divides it.


4. In hierarchical clustering, what is a dendrogram?

  • A) A method for calculating the optimal number of clusters
  • B) A visual representation of the hierarchy of clusters
  • C) A data transformation technique used before clustering
  • D) A measure of the distances between clusters

Answer: B) A visual representation of the hierarchy of clusters
Explanation: A dendrogram is a tree-like diagram used to visualize the results of hierarchical clustering, showing the merging or splitting of clusters.
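
A minimal sketch of drawing a dendrogram with SciPy and Matplotlib (random toy data):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2))

Z = linkage(X, method="ward")
dendrogram(Z)                 # tree of merges; height = merge distance
plt.xlabel("sample index")
plt.ylabel("merge distance")
plt.show()
```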


5. Which of the following linkage methods is based on the minimum distance between points in different clusters?

  • A) Single linkage
  • B) Complete linkage
  • C) Average linkage
  • D) Ward’s linkage

Answer: A) Single linkage
Explanation: Single linkage (also called nearest-neighbor linkage) defines the distance between two clusters as the minimum distance between any two points, one from each cluster.
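
A small sketch of the single-linkage distance between two toy clusters, computed directly from the pairwise distance matrix:

```python
import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[4.0, 0.0], [5.0, 0.0]])

D = cdist(A, B)    # all pairwise distances between points of A and B
print(D.min())     # single linkage: 3.0 (closest cross-cluster pair)
```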


6. What does “complete linkage” refer to in hierarchical clustering?

  • A) The maximum distance between points in different clusters
  • B) The average distance between points in different clusters
  • C) The minimum distance between points in different clusters
  • D) The sum of distances between clusters

Answer: A) The maximum distance between points in different clusters
Explanation: Complete linkage calculates the distance between two clusters as the maximum distance between any two points in those clusters.
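
The same toy setup makes the contrast with single linkage concrete:

```python
import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[4.0, 0.0], [5.0, 0.0]])

D = cdist(A, B)
print(D.max())   # complete linkage: 5.0 (farthest cross-cluster pair)
print(D.min())   # contrast with single linkage: 3.0
```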


7. Which of the following methods is NOT typically used to calculate the distance between clusters in hierarchical clustering?

  • A) Euclidean distance
  • B) Manhattan distance
  • C) Cosine similarity
  • D) Jaccard similarity

Answer: D) Jaccard similarity
Explanation: Jaccard similarity measures overlap between sets (or binary vectors) and is not typically used as a point-to-point distance in hierarchical clustering, which generally relies on metrics such as Euclidean or Manhattan distance.
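
A sketch showing how SciPy's pdist feeds different metrics into hierarchical clustering (toy data; Euclidean and Manhattan, here called "cityblock", are the typical choices):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

X = np.array([[1.0, 0.0], [1.0, 1.0], [4.0, 5.0]])

for metric in ("euclidean", "cityblock", "cosine"):
    # pdist returns the condensed pairwise distance matrix,
    # which linkage() accepts directly
    Z = linkage(pdist(X, metric=metric), method="average")
    print(metric, Z[:, 2])   # merge heights under each metric
```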


8. What is Ward’s linkage method in hierarchical clustering?

  • A) It minimizes the variance within each cluster.
  • B) It maximizes the distance between clusters.
  • C) It uses the nearest point to merge clusters.
  • D) It is not based on any distance metric.

Answer: A) It minimizes the variance within each cluster.
Explanation: Ward’s method minimizes the total within-cluster variance when merging clusters, aiming for more compact clusters.
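
A minimal SciPy sketch of Ward's method on two synthetic blobs (note that Ward's linkage assumes Euclidean distance):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.2, (10, 2)),
               rng.normal(2, 0.2, (10, 2))])

# Each merge is chosen to minimize the increase in
# total within-cluster variance.
Z = linkage(X, method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))  # two compact clusters
```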


9. How does the computational complexity of hierarchical clustering compare to K-means clustering?

  • A) Hierarchical clustering is more efficient than K-means.
  • B) Hierarchical clustering is less efficient than K-means.
  • C) Both algorithms have the same computational complexity.
  • D) Hierarchical clustering has lower space complexity than K-means.

Answer: B) Hierarchical clustering is less efficient than K-means.
Explanation: Naive agglomerative clustering runs in O(n³) time and needs O(n²) memory for the pairwise distance matrix, whereas K-means runs in roughly O(n · k · I) (n points, k clusters, I iterations), making K-means far more scalable for large datasets.
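
An illustrative (machine-dependent, not rigorous) timing sketch with scikit-learn on synthetic data; exact numbers will vary, but the gap grows quickly with n:

```python
import time
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

X = np.random.default_rng(3).normal(size=(3000, 2))

t0 = time.perf_counter()
AgglomerativeClustering(n_clusters=3).fit(X)
t1 = time.perf_counter()
KMeans(n_clusters=3, n_init=10).fit(X)
t2 = time.perf_counter()

print(f"agglomerative: {t1 - t0:.2f}s, k-means: {t2 - t1:.2f}s")
```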


10. Which distance metric is most commonly used in hierarchical clustering?

  • A) Euclidean distance
  • B) Manhattan distance
  • C) Minkowski distance
  • D) Hamming distance

Answer: A) Euclidean distance
Explanation: Euclidean distance is the most commonly used distance metric for hierarchical clustering, especially when dealing with continuous data.
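
As a refresher, the Euclidean distance between two points is sqrt(sum((a_i - b_i)^2)); a tiny sketch computing it by hand and via NumPy:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])

by_hand = np.sqrt(((a - b) ** 2).sum())
print(by_hand)                  # 5.0
print(np.linalg.norm(a - b))    # same value, via the vector norm
```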


11. What does “average linkage” in hierarchical clustering compute?

  • A) The average distance between the nearest points in two clusters
  • B) The average distance between all points in two clusters
  • C) The minimum distance between any point in one cluster and any point in the other cluster
  • D) The maximum distance between any point in one cluster and any point in the other cluster

Answer: B) The average distance between all points in two clusters
Explanation: Average linkage calculates the average of the distances between all pairs of points, one from each of the two clusters.
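
Reusing the toy clusters from the earlier linkage examples, average linkage is just the mean of all cross-cluster pairwise distances:

```python
import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[4.0, 0.0], [5.0, 0.0]])

# pairwise distances are 4, 5, 3, 4 -> average linkage = 4.0
print(cdist(A, B).mean())
```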


12. Which of the following is true about hierarchical clustering?

  • A) It requires the number of clusters to be specified before running the algorithm.
  • B) It always produces the same results regardless of the linkage method used.
  • C) It can be computationally expensive for large datasets.
  • D) It cannot be visualized.

Answer: C) It can be computationally expensive for large datasets.
Explanation: Hierarchical clustering can be computationally expensive, especially when dealing with large datasets, because it involves calculating distances between all pairs of data points.


13. Which of the following clustering algorithms does not require the number of clusters to be predefined?

  • A) K-means clustering
  • B) DBSCAN
  • C) Agglomerative hierarchical clustering
  • D) Both B and C

Answer: D) Both B and C
Explanation: Both DBSCAN (Density-Based Spatial Clustering of Applications with Noise) and agglomerative hierarchical clustering do not require the number of clusters to be specified beforehand.
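
A scikit-learn sketch of both (toy blobs; the eps, min_samples, and distance_threshold values are illustrative): DBSCAN discovers clusters from density, and agglomerative clustering can cut its hierarchy with a distance threshold instead of a cluster count.

```python
import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.2, (20, 2)),
               rng.normal(3, 0.2, (20, 2))])

# No number of clusters specified in either call
print(DBSCAN(eps=0.5, min_samples=5).fit_predict(X))
print(AgglomerativeClustering(n_clusters=None,
                              distance_threshold=1.0,
                              linkage="single").fit_predict(X))
```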


14. In hierarchical clustering, which of the following linkage methods is most likely to produce compact, spherical clusters?

  • A) Single linkage
  • B) Complete linkage
  • C) Average linkage
  • D) Ward’s linkage

Answer: D) Ward’s linkage
Explanation: Ward’s linkage method tends to produce compact, spherical clusters by minimizing within-cluster variance.


15. Which of the following is NOT an advantage of hierarchical clustering?

  • A) It is easy to visualize with dendrograms.
  • B) It does not require a predefined number of clusters.
  • C) It is computationally efficient for large datasets.
  • D) It can produce a hierarchy of clusters.

Answer: C) It is computationally efficient for large datasets.
Explanation: Hierarchical clustering is computationally expensive for large datasets, making it less efficient than other clustering algorithms like K-means for large-scale problems.
