1. Introduction
In machine learning, clustering is an unsupervised learning technique that groups a dataset into several categories so that data points within the same group are more similar to one another than to points in other groups. It is widely applied in fields such as customer segmentation, image compression, and market analysis. The goal of clustering is to maximize the similarity between data points within a group while minimizing the similarity between data points from different groups.
K-Means clustering is an iterative, distance-based optimization algorithm that groups data around a set of centroids, minimizing the sum of squared distances from each data point to the centroid of its assigned cluster. Due to its simplicity and efficiency, K-Means is widely used in data analysis.
In practice, K-Means clustering can be applied to customer segmentation, helping businesses identify groups of customers with similar buying behavior, or to image compression, where clustering pixel colors reduces the number of colors in an image. In this article, we explore the mathematical principles of K-Means clustering, walk through the steps of the algorithm, and provide Python code examples to help readers understand its practical applications.
2. What is K-Means Clustering?
K-Means clustering is a centroid-based algorithm that iteratively assigns data points to one of the K clusters. Each centroid represents the center of a cluster, and the algorithm continuously adjusts the positions of the centroids until certain convergence conditions are met. The goal of K-Means clustering is to minimize the sum of squared distances from each data point to its assigned centroid.
Specifically, the steps of K-Means clustering can be summarized as follows:
1. Randomly select K initial centroids.
2. Assign each data point to the cluster whose centroid is closest.
3. Recalculate the centroid of each cluster by averaging the data points assigned to it.
4. Repeat steps 2 and 3 until the centroids stop changing or the maximum number of iterations is reached.
The final result of K-Means clustering is K clusters, each consisting of a centroid and all data points assigned to it, chosen so that the total squared distance between data points and their cluster centroids is as small as possible.
3. Mathematical Principles of K-Means Clustering
The goal of K-Means clustering is to minimize the sum of squared errors (SSE), which is the sum of the squared distances from each data point to its assigned centroid:

$$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x \in C_i} \| x - \mu_i \|^2$$

Where:

- $K$ is the number of clusters.
- $C_i$ is the i-th cluster.
- $\mu_i$ is the centroid of the i-th cluster.
- $x$ is a data point in cluster $C_i$.
This optimization problem is solved by iteratively adjusting the centroids of each cluster to minimize the SSE. The process alternates between two steps: assignment (assigning points to the nearest centroid) and update (recalculating centroids), until convergence.
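To make this objective concrete, here is a minimal NumPy sketch that computes the SSE for a given assignment. The array names `X`, `labels`, and `centroids` are illustrative:

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of squared distances from each point to its assigned centroid."""
    # centroids[labels] lines up each point with the centroid of its cluster
    return np.sum((X - centroids[labels]) ** 2)

# Tiny worked example: two clusters in 2-D
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
labels = np.array([0, 0, 1])
centroids = np.array([[0.0, 0.5], [10.0, 10.0]])
print(sse(X, labels, centroids))  # 0.25 + 0.25 + 0.0 = 0.5
```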
4. Detailed Algorithm Steps
The K-Means clustering algorithm consists of the following steps:
Step 1: Choose the value of K
The value of K represents the number of clusters into which the data will be divided. Choosing an appropriate value of K is a critical step, because a poor choice of K degrades the clustering results. Typically, the "Elbow Method" (covered in a later section) is used to determine a suitable K.
Step 2: Initialize the centroids
You can randomly choose K data points as the initial centroids, or use a heuristic such as K-Means++ to initialize the centroids more carefully and reduce the run-to-run variability of the clustering results.
Step 3: Assign data points to clusters
Each data point is assigned to the cluster whose centroid is closest. The Euclidean distance, shown below, is commonly used to measure the distance between a data point and a centroid.
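For points with $d$ features, the Euclidean distance between a data point $x$ and a centroid $\mu_i$ is:

$$d(x, \mu_i) = \| x - \mu_i \| = \sqrt{\sum_{j=1}^{d} (x_j - \mu_{i,j})^2}$$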
Step 4: Update centroids
For each cluster, the centroid is recalculated as the average of the coordinates of all data points in the cluster, as formalized below.
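In symbols, the new centroid of cluster $C_i$ is the mean of its members:

$$\mu_i = \frac{1}{|C_i|} \sum_{x \in C_i} x$$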
Step 5: Convergence check
If the centroids no longer change or the maximum number of iterations is reached, the algorithm stops. The clustering result at this point is the final partition of the data.
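A common way to formalize "no longer change", and the one used in the implementation below (via the `tol` parameter), is to stop once every centroid moves less than a small threshold $\epsilon$ between iterations:

$$\max_{i} \left\| \mu_i^{(t+1)} - \mu_i^{(t)} \right\| < \epsilon$$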
5. Python Code Implementation
Below is a from-scratch Python implementation of the K-Means clustering algorithm using NumPy, with Matplotlib for visualization and Scikit-Learn's make_blobs to generate a synthetic dataset:
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# Generate synthetic dataset
np.random.seed(42)
X, y = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)

# Visualize the dataset
plt.scatter(X[:, 0], X[:, 1], s=50)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Generated Data')
plt.show()

# Define KMeans algorithm class
class KMeans:
    def __init__(self, k=3, max_iters=100, tol=1e-4):
        self.k = k
        self.max_iters = max_iters
        self.tol = tol

    def fit(self, X):
        # Randomly initialize centroids
        self.centroids = X[np.random.choice(range(X.shape[0]), self.k, replace=False)]
        for _ in range(self.max_iters):
            # Assign data points to the nearest centroid
            self.clusters = self._assign_clusters(X)
            # Recalculate centroids
            new_centroids = self._compute_centroids(X)
            # Check if centroids have converged
            if np.all(np.linalg.norm(self.centroids - new_centroids, axis=1) < self.tol):
                break
            self.centroids = new_centroids

    def _assign_clusters(self, X):
        distances = np.linalg.norm(X[:, np.newaxis] - self.centroids, axis=2)
        return np.argmin(distances, axis=1)

    def _compute_centroids(self, X):
        return np.array([X[self.clusters == i].mean(axis=0) for i in range(self.k)])

    def predict(self, X):
        distances = np.linalg.norm(X[:, np.newaxis] - self.centroids, axis=2)
        return np.argmin(distances, axis=1)

# Train model
kmeans = KMeans(k=4)
kmeans.fit(X)

# Predict cluster assignments
y_pred = kmeans.predict(X)

# Visualize clustering results
plt.scatter(X[:, 0], X[:, 1], c=y_pred, cmap='viridis', s=50)
plt.scatter(kmeans.centroids[:, 0], kmeans.centroids[:, 1], s=200, c='red', marker='X')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('K-Means Clustering Results')
plt.show()
```
6. Choosing K: The Elbow Method
Choosing the right value of K is an important step in K-Means clustering. The Elbow Method is a common approach: compute the sum of squared errors (SSE) for a range of K values and plot SSE against K. SSE always decreases as K increases, but the point at which the rate of decrease slows sharply, the "elbow point", indicates a good choice of K.
Below is an example of using the Elbow Method in Python:
```python
sse = []
for k in range(1, 10):
    kmeans = KMeans(k=k)
    kmeans.fit(X)
    sse.append(sum(np.min(np.linalg.norm(X[:, np.newaxis] - kmeans.centroids, axis=2), axis=1) ** 2))

# Visualize the Elbow Method
plt.plot(range(1, 10), sse, marker='o')
plt.xlabel('Number of Clusters (K)')
plt.ylabel('SSE')
plt.title('Elbow Method for Optimal K')
plt.show()
```
7. Advantages and Disadvantages of K-Means Clustering
Advantages:
- Simplicity and Intuitiveness: The K-Means clustering algorithm is simple, intuitive, and easy to implement.
- Efficiency: K-Means is computationally efficient even on large datasets; each iteration costs roughly O(nKd) for n points, K clusters, and d features.
Disadvantages:
- Sensitivity to Initialization: The algorithm is sensitive to the initial positions of the centroids, which can lead to local optima. K-Means++ is an improved method that helps select better initial centroids.
- Requires a Predefined K: The number of clusters, K, must be specified in advance, which can be challenging for users unfamiliar with the structure of the data.
- Vulnerability to Outliers: Outliers significantly affect the computation of centroids, potentially skewing the results.
8. Improvements with K-Means++
To reduce sensitivity to the choice of initial centroids, K-Means++ provides an improved seeding strategy that spreads the initial centroids apart, reducing the likelihood of converging to a poor local optimum. The KMeans implementation in the Scikit-Learn library uses K-Means++ as the default method for centroid initialization.
```python
from sklearn.cluster import KMeans

# K-Means++ initialization
kmeans = KMeans(n_clusters=4, init='k-means++', max_iter=300, n_init=10, random_state=0)
y_kmeans = kmeans.fit_predict(X)

# Visualizing clustering results
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, cmap='viridis', s=50)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=200, c='red', marker='X')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('K-Means++ Clustering Results')
plt.show()
```
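For intuition, the seeding idea itself can be sketched in a few lines of NumPy. This is an illustrative simplification, not Scikit-Learn's actual implementation (which adds refinements such as sampling several candidate centroids per step):

```python
import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """K-Means++ seeding: choose each new centroid with probability
    proportional to its squared distance from the nearest chosen centroid."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]  # first centroid: uniform at random
    for _ in range(k - 1):
        # Squared distance from every point to its nearest chosen centroid
        d2 = ((X[:, None] - np.array(centroids)) ** 2).sum(axis=2).min(axis=1)
        # Far-away points are more likely to be picked next
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)
```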
9. Practical Applications
K-Means clustering has a wide range of practical applications, including:
- Customer Segmentation: Classify customers based on purchasing behavior or browsing habits to enable targeted marketing.
- Image Compression: Cluster image pixels to group similar colors, reducing the number of colors used and compressing the image (see the sketch after this list).
- Market Analysis: Identify similar products across different markets using K-Means clustering.
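To make the image-compression use case concrete, here is a short sketch that quantizes an image's color palette with Scikit-Learn's KMeans. It assumes `img` is an RGB array of shape `(height, width, 3)` loaded elsewhere, and the palette size of 16 is arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

def compress_colors(img, n_colors=16):
    """Reduce an RGB image to n_colors by clustering its pixel values."""
    h, w, c = img.shape
    pixels = img.reshape(-1, c).astype(float)  # one row per pixel
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Replace each pixel with the palette color (centroid) of its cluster
    return km.cluster_centers_[km.labels_].reshape(h, w, c).astype(img.dtype)
```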
10. Conclusion
K-Means clustering is a powerful and straightforward clustering algorithm suitable for structured numerical data. It performs well in many applications but has limitations, such as sensitivity to initial values and vulnerability to outliers. By combining methods like the Elbow Method and improvements such as K-Means++, the clustering performance can be significantly enhanced.
We hope this article has provided you with a deeper understanding of the principles and implementation of K-Means clustering, and that you can apply the code to perform clustering analysis in your own projects.