Clustering | Different Methods and Applications (Updated 2024)

Saurav Kaushik | Last Updated: 05 Nov, 2024

When you first encounter an unsupervised learning problem, it can be confusing: you aren’t seeking specific insights, but rather structure in the data. The process of identifying similar groups within a dataset is known as clustering or cluster analysis.

Clustering is one of the most popular techniques used by data scientists. Entities in each group are comparatively more similar to other entities of that group than to those of the other groups. In this article, I will take you through the types of clustering, different clustering algorithms, their applications, and a comparison between two of the most commonly used clustering methods in machine learning.



Learning Objectives

  • Learn about clustering in machine learning, one of the most popular unsupervised learning techniques.
  • Get to know K means and hierarchical clustering and the difference between the two.

What Is Clustering in Machine Learning?

Clustering in machine learning is the task of dividing unlabeled data points into groups such that data points in the same cluster are more similar to one another than to those in other clusters. In simple words, the aim of clustering is to segregate groups with similar traits and assign them into clusters.

Let’s understand this with an example. Suppose you are the head of a rental store and wish to understand the preferences of your customers to scale up your business. Is it possible for you to look at the details of each customer and devise a unique business strategy for each one of them? Definitely not. But what you can do is cluster all of your customers into, say, 10 groups based on their purchasing habits and use a separate strategy for the customers in each of these 10 groups. And this is what we call clustering.

Now that we understand what clustering is, let’s take a look at its different types.

Types of Clustering Techniques in Machine Learning

Clustering broadly divides into two subgroups:

  • Hard Clustering: Each input data point either fully belongs to a cluster or it doesn’t. For instance, in the example above, every customer is assigned to exactly one of the ten groups.
  • Soft Clustering: Rather than assigning each input data point to a distinct cluster, it assigns a probability or likelihood of the data point belonging to each cluster. For example, in the given scenario, each customer receives a probability of belonging to any of the ten retail store clusters (see the sketch after this list).
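
To make the distinction concrete, here is a minimal sketch in R, assuming made-up 2-D points and two hand-picked centers (both hypothetical, not from any library API): hard assignment picks the nearest center, while soft assignment spreads membership weights across the centers.

# Hard vs. soft assignment on made-up data (illustrative sketch only)
set.seed(42)
pts     <- matrix(rnorm(10), ncol = 2)        # 5 data points in 2-D
centers <- rbind(c(-1, -1), c(1, 1))          # 2 hand-picked cluster centers

# distance of every point to every center
d <- sapply(1:nrow(centers), function(k)
  sqrt(rowSums((pts - matrix(centers[k, ], nrow(pts), 2, byrow = TRUE))^2)))

hard <- apply(d, 1, which.min)        # hard clustering: one cluster per point
soft <- (1 / d) / rowSums(1 / d)      # soft clustering: weights summing to 1 per point

hard
round(soft, 2)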

Different Types of Clustering Algorithms

Since the notion of a ‘good’ clustering is subjective, there are many ways to achieve it. Every methodology follows a different set of rules for defining the ‘similarity’ among data points. In fact, more than 100 clustering algorithms are known, but only a few are used popularly. Let’s look at them in detail:

Connectivity Models

As the name suggests, these models are based on the notion that data points closer in data space exhibit more similarity to each other than data points lying farther away. These models can follow two approaches. In the first, they start by classifying all data points into separate clusters and then aggregate them as the distance decreases. In the second, all data points are classified as a single cluster and then partitioned as the distance increases. The choice of distance function is subjective. These models are very easy to interpret but lack the scalability to handle big datasets. Examples of these models are the hierarchical clustering algorithms and their variants.

Centroid Models

These clustering algorithms iterate, deriving similarity from the proximity of a data point to the centroid or cluster center. The k-Means clustering algorithm, a popular example, falls into this category. These models necessitate specifying the number of clusters beforehand, requiring prior knowledge of the dataset. They iteratively run to discover local optima.

Distribution Models

These clustering models are based on the notion of how probable it is that all data points in the cluster belong to the same distribution (for example, a Gaussian). These models often suffer from overfitting. A popular example is the Expectation-Maximization algorithm, which uses multivariate normal distributions.
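
As a concrete illustration, the mclust package in R fits Gaussian mixtures with EM; the two-component synthetic data below and the choice of G = 2 are assumptions for demonstration, not part of the article’s example.

# Sketch: fitting a Gaussian mixture by EM with the mclust package
library(mclust)

set.seed(7)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),   # points around (0, 0)
           matrix(rnorm(100, mean = 4), ncol = 2))   # points around (4, 4)

fit <- Mclust(x, G = 2)     # EM fit with 2 mixture components
table(fit$classification)   # hard labels derived from the fitted mixture
head(round(fit$z, 3))       # soft (posterior) membership probabilities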

Density Models

These models search the data space for regions where the density of data points varies. They isolate the dense regions and assign the data points within each such region to the same cluster. Popular examples of density models are DBSCAN and OPTICS. These models are particularly useful for identifying clusters of arbitrary shape and detecting outliers, as they can separate points located in sparse regions of the data space from points that belong to dense regions.
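
For instance, the dbscan package in R implements DBSCAN directly; here is a minimal sketch on synthetic data, where the eps and minPts values are arbitrary choices for illustration.

# Sketch: DBSCAN with the dbscan package; label 0 marks noise/outliers
library(dbscan)

set.seed(7)
x <- rbind(matrix(rnorm(100, mean = 0, sd = 0.3), ncol = 2),  # dense blob 1
           matrix(rnorm(100, mean = 3, sd = 0.3), ncol = 2),  # dense blob 2
           matrix(runif(20, min = -1, max = 4), ncol = 2))    # sparse noise

db <- dbscan(x, eps = 0.4, minPts = 5)
table(db$cluster)   # points in sparse regions get cluster 0 (noise)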

Now I will be taking you through two of the most popular clustering algorithms in detail – K Means and Hierarchical. Let’s begin.

K Means Clustering

K means is an iterative clustering algorithm that converges to a local optimum of its objective (minimizing within-cluster variance). This algorithm works in these 5 steps:

Step 1:

Specify the desired number of clusters K: Let us choose k=2 for these 5 data points in 2-D space.


Step 2:

Randomly assign each data point to a cluster: Let’s assign three points in cluster 1, shown using red color, and two points in cluster 2, shown using grey color.


Step 3:

Compute cluster centroids: The centroid of data points in the red cluster is shown using the red cross, and those in the grey cluster using a grey cross.


Step 4:

Re-assign each point to the closest cluster centroid: Note that the data point at the bottom, although currently in the red cluster, is closer to the centroid of the grey cluster. Thus, we re-assign that data point to the grey cluster.


Step 5:

Re-compute cluster centroids: Now, re-computing the centroids for both clusters.


Repeat steps 4 and 5 until no improvements are possible: We repeat the 4th and 5th steps until the algorithm converges, i.e., when there is no further switching of data points between the two clusters for two successive repeats. Note that this convergence is to a local optimum, not necessarily a global one. Reaching it marks the termination of the algorithm if no other stopping criterion is specified.
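
This whole loop is what base R’s kmeans() performs internally; here is a minimal sketch on five made-up 2-D points mirroring the walkthrough above (the coordinates are invented for illustration).

# Sketch: the five steps above via base R's kmeans() on made-up data
set.seed(101)
pts <- matrix(c(1, 1,  1.5, 2,  3, 4,  5, 7,  3.5, 5), ncol = 2, byrow = TRUE)

km <- kmeans(pts, centers = 2, nstart = 10)  # nstart re-runs to avoid poor local optima
km$cluster   # final assignment of each point
km$centers   # final centroids after convergence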

Hierarchical Clustering

Hierarchical clustering, as the name suggests, is an algorithm that builds a hierarchy of clusters. It starts with all the data points assigned to a cluster of their own. Then the two nearest clusters are merged into the same cluster. The algorithm terminates when only a single cluster is left.

The results of hierarchical clustering can be shown using a dendrogram. The dendrogram can be interpreted as:

[Dendrogram: 25 data points merged step by step into a single cluster]

At the bottom, we start with 25 data points, each assigned to separate clusters. The two closest clusters are then merged till we have just one cluster at the top. The height in the dendrogram at which two clusters are merged represents the distance between two clusters in the data space.

The number of clusters that best depicts the different groups can be chosen by observing the dendrogram. The best choice is the number of vertical lines in the dendrogram cut by a horizontal line that can traverse the maximum distance vertically without intersecting a cluster.

In the above example, the best choice of no. of clusters will be 4 as the red horizontal line in the dendrogram below covers the maximum vertical distance AB.

[Dendrogram: a red horizontal line spanning the maximum vertical distance AB cuts four vertical lines, giving 4 clusters]
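
In R, this bottom-up procedure is available through the base functions dist(), hclust(), and cutree(); here is a minimal sketch on synthetic data (the 25 points and the complete-linkage choice are assumptions for illustration).

# Sketch: agglomerative clustering and dendrogram in base R
set.seed(101)
x  <- matrix(rnorm(50), ncol = 2)            # 25 points in 2-D, as in the figure
hc <- hclust(dist(x), method = "complete")   # repeatedly merge the two closest clusters

plot(hc)             # draws the dendrogram
cutree(hc, k = 4)    # cut the tree into 4 clusters, as chosen above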

Important Points for Hierarchical Clustering

  • This algorithm has been implemented above using a bottom-up approach. It is also possible to follow a top-down approach starting with all data points assigned in the same cluster and recursively performing splits till each data point is assigned a separate cluster.
  • The decision to merge two clusters is taken on the basis of the closeness of these clusters. There are multiple metrics for deciding the closeness of two clusters (a small R illustration follows this list):
    • Euclidean distance: $\|a-b\|_2 = \sqrt{\sum_i (a_i-b_i)^2}$
    • Squared Euclidean distance: $\|a-b\|_2^2 = \sum_i (a_i-b_i)^2$
    • Manhattan distance: $\|a-b\|_1 = \sum_i |a_i-b_i|$
    • Maximum distance: $\|a-b\|_\infty = \max_i |a_i-b_i|$
    • Mahalanobis distance: $\sqrt{(a-b)^\top S^{-1}(a-b)}$, where $S$ is the covariance matrix
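
Most of these metrics are one-liners in base R; in the sketch below, the vectors a and b are arbitrary, and the covariance matrix for the Mahalanobis distance is estimated from made-up data.

# Sketch: the closeness metrics above, computed in base R
a <- c(1, 2, 3); b <- c(2, 4, 6)

sqrt(sum((a - b)^2))   # Euclidean distance
sum((a - b)^2)         # squared Euclidean distance
sum(abs(a - b))        # Manhattan distance
max(abs(a - b))        # maximum (Chebyshev) distance

X <- matrix(rnorm(300), ncol = 3)          # made-up sample to estimate S
S <- cov(X)                                # covariance matrix
sqrt(mahalanobis(a, center = b, cov = S))  # mahalanobis() returns the squared distance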

Difference Between K Means and Hierarchical Clustering

  • Hierarchical clustering methods can’t handle big data well, but K Means can. This is because the time complexity of K Means is linear, i.e., O(n), while that of hierarchical clustering is quadratic, i.e., O(n²) (see the timing sketch after this list).
  • Since we start with a random choice of clusters, the results produced by running the algorithm multiple times might differ in K Means clustering. While in Hierarchical clustering, the results are reproducible.
  • K Means is found to work well when the shape of the clusters is hyperspherical (like a circle in 2D or a sphere in 3D).
  • K Means clustering requires prior knowledge of K, i.e., no. of clusters you want to divide your data into. But, you can stop at whatever number of clusters you find appropriate in hierarchical clustering by interpreting the dendrogram.
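
To get a feel for the scalability gap mentioned in the first point, here is a rough, illustrative timing sketch; the sample size is arbitrary and exact timings will vary by machine.

# Sketch: rough runtime comparison (illustrative, not a rigorous benchmark)
set.seed(101)
n <- 5000
x <- matrix(rnorm(2 * n), ncol = 2)

system.time(kmeans(x, centers = 5, nstart = 5))  # roughly linear in n
system.time(hclust(dist(x)))                     # dist() alone materializes O(n^2) pairs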

Applications of Clustering

Clustering has a large number of applications spread across various domains. Some of the most popular are recommendation engines, market segmentation, social network analysis, search result grouping, medical imaging, image segmentation, and anomaly detection.

Improving Supervised Learning Algorithms With Clustering

Clustering is an unsupervised machine learning approach, but can it also be used to improve the accuracy of supervised machine learning algorithms, by clustering the data points into similar groups and using these cluster labels as independent variables in the supervised model? Let’s find out.

Let’s check out the impact of clustering on the accuracy of our model for a classification problem, using R on 3000 observations of stock data with 100 predictors to predict whether a stock will go up or down. This dataset contains 100 independent variables, X1 to X100, representing the profile of a stock, and one outcome variable Y with two levels: 1 for a rise in the stock price and -1 for a drop.

Let’s first try applying random forest without clustering.

#loading required libraries
library('randomForest')
library('Metrics')

#setting random seed
set.seed(101)

#loading dataset
data<-read.csv("train.csv",stringsAsFactors= T)

#checking dimensions of data
dim(data)

## [1] 3000  101

#specifying outcome variable as factor
data$Y<-as.factor(data$Y)

#dividing the dataset into train and test
train<-data[1:2000,]
test<-data[2001:3000,]

#applying randomForest
model_rf<-randomForest(Y~.,data=train)
preds<-predict(object=model_rf,test[,-101])
table(preds)

## preds
##  -1   1
## 453 547

#checking performance with AUC
auc(preds,test$Y)

## [1] 0.4522703

So, the AUC we get is about 0.45. Now let’s create five clusters based on the values of the independent variables using K-means and reapply random forest.

#combining test and train
all<-rbind(train,test)

#creating 5 clusters using K-means clustering
Cluster <- kmeans(all[,-101], 5)

#adding clusters as an independent variable to the dataset
all$cluster<-as.factor(Cluster$cluster)

#dividing the dataset into train and test
train<-all[1:2000,]
test<-all[2001:3000,]

#applying randomForest
model_rf<-randomForest(Y~.,data=train)
preds2<-predict(object=model_rf,test[,-101])
table(preds2)

## preds2
## -1   1
## 548 452

auc(preds2,test$Y)

## [1] 0.5345908

Whoo! Even though the final score is still modest, clustering has given our model a significant boost, from an AUC of 0.45 to slightly above 0.53.

This shows that clustering can indeed be helpful for supervised machine-learning tasks.

Conclusion

In this article, we have discussed the various clustering techniques in machine learning and explored different ways of performing clustering. We came across the application of clustering for unsupervised learning in a large number of domains and also examined how clustering techniques in machine learning can improve the accuracy of a supervised machine learning algorithm.

Although clustering is easy to implement, you need to take care of some important aspects, like treating outliers in your data and making sure each cluster has a sufficient population. These aspects of clustering are dealt with in great detail in this article.

Hope you found this overview of clustering in machine learning insightful and valuable for your understanding of cluster analysis and its applications!

Key Takeaways

  • Clustering helps to identify patterns in data and is useful for exploratory data analysis, customer segmentation, anomaly detection, pattern recognition, and image segmentation.
  • It is a powerful tool for understanding data and can help to reveal insights that may not be apparent through other methods of analysis.
  • Its types include partition-based, hierarchical, density-based, and grid-based clustering.
  • The choice of clustering algorithm and the number of clusters to use depend on the nature of the data and the specific problem at hand.

Frequently Asked Questions

Q1. What is clustering in machine learning?

A. Clustering in machine learning involves grouping similar data points together based on their features, allowing for pattern discovery without predefined labels.

Q2. What is clustering and its type?

A. Clustering is a method of unsupervised learning where data points are grouped based on similarity. Types include K-means, hierarchical, DBSCAN, and mean shift.

Q3. What is an example of clustering?

A. An example of clustering is customer segmentation, where a business groups customers based on purchasing behavior to tailor marketing strategies.

Q4. How does clustering work?

A. Clustering works by evaluating the distances or similarities between data points, then grouping them into clusters where intra-cluster similarity is maximized and inter-cluster similarity is minimized.

Saurav is a Data Science enthusiast, currently in the final year of his graduation at MAIT, New Delhi. He loves to use machine learning and analytics to solve complex data problems.

Responses From Readers


Ankit Gupta

Very nice tutorial Saurav!

Richard Warnung

Nice post! Please correct the last link - it is broken - thanks!

Sai Satheesh G

I accept that clustering may help in improving the supervised models. But here in the above: Clustering is performed on sample points (4361 rows). Is that right.? But I think correct way is to cluster features (X1-X100) and to represent data using cluster representatives and then perform supervised learning. Can you please elaborate further? Why samples are being clustered in the code (not independent variables)?

Flash Card

What is Clustering in Machine Learning?

Clustering is a way that machine learning groups similar data points together without any labels to start with. The idea is to find natural patterns and sort data into “clusters” so that data points that are alike end up in the same group.

In the image below:

  • Before Clustering: The data points are just scattered around with no clear organization.
  • After Clustering: The data points are grouped into clusters, with each cluster having points that share similar traits.
Clustering is super helpful for things like customer segmentation, image analysis, and exploring big data sets. It helps make sense of large amounts of data and makes it easier to spot patterns and understand how things relate to each other.


Quiz

What is the main goal of clustering in machine learning?

Flash Card

What are the two main subgroups of clustering, and how do they differ?

Clustering is divided into two main subgroups: Hard Clustering and Soft Clustering. Hard Clustering assigns each data point to a single cluster, meaning it either fully belongs to a cluster or not. Soft Clustering assigns a probability or likelihood of a data point belonging to multiple clusters, allowing for more flexibility.

Quiz

How does Hard Clustering differ from Soft Clustering?

Flash Card

Can you describe the different types of clustering algorithms and their basis?

  • Connectivity Models: These are based on the idea that data points closer in data space are more similar than those farther away.
  • Centroid Models: These algorithms, like K-means, derive similarity from the proximity of a data point to the centroid or cluster center.
  • Distribution Models: These are based on the probability that all data points in a cluster belong to the same distribution.
  • Density Models: These search for areas of varied density of data points in the data space.

Quiz

Which clustering model is based on the proximity of a data point to the centroid or cluster center?

Flash Card

What are the key differences between K-means and hierarchical clustering?

K-means can handle large datasets well due to its linear time complexity, while hierarchical clustering struggles with big data due to its quadratic time complexity. K-means results can vary with different runs due to random initial cluster choices, whereas hierarchical clustering results are reproducible. K-means works well with hyperspherical cluster shapes, while hierarchical clustering does not require predefined cluster numbers and can be stopped at any level.


Quiz

Which clustering method is known for its reproducible results?

Flash Card

What are some common applications of clustering in various domains?

Clustering is widely used in recommendation engines to group similar items or users. It is applied in market segmentation to identify distinct customer groups. Social network analysis uses clustering to detect communities or groups. Clustering is also used in search result grouping, medical imaging, image segmentation, and anomaly detection.

Quiz

In which domain is clustering used to detect communities or groups?

Flash Card

How can clustering enhance supervised learning algorithms?

Clustering can improve supervised learning algorithms by using cluster labels as independent variables. This approach groups data points into similar clusters, which can then be used to enhance model performance. For example, in a stock prediction model, clustering improved the AUC from 0.45 to slightly above 0.53, demonstrating its potential benefits.

Quiz

How does clustering improve supervised learning algorithms?

Flash Card

Why is prior knowledge of the number of clusters required in K-means clustering?

K-means clustering requires prior knowledge of the number of clusters (K) because it needs to divide the data into a predefined number of clusters. This requirement helps the algorithm to initialize and iterate towards the optimal cluster centers. In contrast, hierarchical clustering does not need this prior knowledge and allows flexibility in choosing the number of clusters based on a dendrogram.

Quiz

Why does K-means clustering require prior knowledge of the number of clusters?

Flash Card

What is the impact of clustering on the accuracy of a classification model in the provided example?

In the example, clustering was used to improve a classification model predicting stock price movements. Although the final score was not high, clustering increased the model's AUC from 0.45 to slightly above 0.53. This demonstrates that clustering can provide a significant boost to model performance in supervised learning tasks.

Quiz

How did clustering impact the accuracy of the classification model in the example?

