Anomaly detection using Isolation Forest – A Complete Guide

akshara_416 Last Updated : 21 Nov, 2024
8 min read

Anomaly detection is crucial in data mining and machine learning, finding applications in fraud detection, network security, and more. The Isolation Forest algorithm, introduced by Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou in 2008, stands out among anomaly detection methods. It uses decision trees to efficiently isolate anomalies by randomly selecting features and splitting data based on threshold values. This approach quickly identifies outliers, making it well-suited for large datasets where anomalies are rare and distinct.

In this article, we delve into the workings of the Isolation Forest algorithm, its implementation in Python, and its role as a powerful tool in anomaly detection. We also explore the metrics used to evaluate its performance and discuss its applications across various domains. We will look at how to use Isolation Forest for finding outliers in data through an easy worked example, and by the end, you will know how to use Isolation Forest outlier detection in your own projects.

Learning Outcomes

  • Understand the concept of average path length in isolation trees and why anomalies tend to produce shorter paths.
  • Explain how binary trees partition data for efficient search, and how Isolation Trees (iTrees) build on that structure.
  • Apply data analysis techniques to explore data stored in a Pandas DataFrame before modeling.
  • Trace the origins of the Isolation Forest (iForest) model, introduced by Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou at the Eighth IEEE International Conference on Data Mining (ICDM 2008).
  • Describe the principles and workings of the Isolation Forest model and its advantages over other anomaly detection algorithms.
  • Implement an iTree (Isolation Tree) and explain its role in the construction of an Isolation Forest model.
  • Evaluate the effectiveness of the Isolation Forest model using appropriate metrics and interpret the results for anomaly detection.

This article was published as a part of the Data Science Blogathon.

What is Isolation Forest?

Isolation Forest is a method used to find unusual data points, known as anomalies or outliers, in a dataset. It is particularly good at spotting these anomalies in large amounts of data.

Since its introduction, Isolation Forest has gained popularity as a fast and reliable algorithm for anomaly detection in various fields such as cybersecurity, finance, and medical research.

Isolation Forests for Anomaly Detection

Isolation Forests (IF), similar to Random Forests, are built from decision trees. And since there are no pre-defined labels here, Isolation Forest is an unsupervised model.

Isolation Forests were built based on the fact that anomalies are the data points that are “few and different”.

In an Isolation Forest, randomly sub-sampled data is processed in a tree structure based on randomly selected features. Samples that travel deeper into the tree are less likely to be anomalies, since they required more cuts to isolate. Similarly, samples that end up in shorter branches indicate anomalies, since it was easier for the tree to separate them from the other observations.

Let’s take a deeper look at how this actually works.

How do Isolation Forests work?

As mentioned earlier, an Isolation Forest for outlier detection is nothing but an ensemble of binary decision trees, where each tree is called an Isolation Tree (iTree). The algorithm starts by training on the data, generating Isolation Trees.

Step-by-Step Guide on How Isolation Forest Works

Let us look at the complete algorithm step by step:

  • When given a dataset, a random sub-sample of the data is selected and assigned to a binary tree.
  • Branching of the tree starts by selecting a random feature (from the set of all N features). Branching is then done on a random threshold (any value in the range between the minimum and maximum values of the selected feature).
  • If the value of a data point is less than the selected threshold, it goes to the left branch, else to the right. A node is thus split into left and right branches.
  • This process continues recursively from step 2 until each data point is completely isolated, or until the max depth (if defined) is reached.
  • The above steps are repeated to construct many random binary trees; a minimal sketch of the isolation step follows below.
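To make the list above concrete, here is a minimal, simplified sketch of the isolation step in Python. It isolates a single point on the fly rather than building and storing a full iTree, but the random feature/threshold logic is the same as described above:

import numpy as np

def isolation_path_length(x, X, depth=0, max_depth=10):
    # Stop when the point is isolated or the depth limit is reached.
    if depth >= max_depth or len(X) <= 1:
        return depth
    # Pick a random feature and a random threshold in its [min, max] range.
    feature = np.random.randint(X.shape[1])
    lo, hi = X[:, feature].min(), X[:, feature].max()
    if lo == hi:
        return depth
    threshold = np.random.uniform(lo, hi)
    # Keep only the partition that x falls into, then recurse.
    mask = X[:, feature] < threshold
    X_next = X[mask] if x[feature] < threshold else X[~mask]
    return isolation_path_length(x, X_next, depth + 1, max_depth)

Anomalous points tend to return small path lengths from calls like this, while points inside dense regions need many more splits to isolate.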

After creating an ensemble of iTrees (an Isolation Forest), model training is complete. During scoring, the system traverses a data point through all the trees that were trained earlier, and an 'anomaly score' is assigned to the point based on the depth of the tree required to arrive at it. This score aggregates the depths obtained from each of the iTrees. Based on the contamination parameter (the expected percentage of anomalies present in the data), the model labels anomalies -1 and normal points 1.
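For reference, the original iForest paper defines the anomaly score from the expected path length E(h(x)) across the trees as s(x, n) = 2^(-E(h(x))/c(n)), where c(n) normalizes by the average path length of an unsuccessful binary search tree lookup over n points. A minimal sketch of that formula (note that sklearn exposes a shifted version of this via decision_function, not this raw score):

import numpy as np

def c(n):
    # Average path length of an unsuccessful BST search over n points,
    # using the approximation H(i) ~ ln(i) + Euler-Mascheroni constant.
    if n <= 1:
        return 0.0
    return 2.0 * (np.log(n - 1) + np.euler_gamma) - 2.0 * (n - 1) / n

def anomaly_score(expected_path_length, n):
    # Close to 1 -> likely anomaly; well below 0.5 -> likely normal.
    return 2.0 ** (-expected_path_length / c(n))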


Figure: isolating an anomaly point versus a nominal point (Source: IEEE). We can see that it was easier to isolate the anomaly than the normal observation.

Implementation in Python

Let us look at how to implement Isolation Forest in Python.

Step 1: Read the Input Data

import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.ensemble import IsolationForest

# Load the dataset and inspect the first 10 rows
data = pd.read_csv('marks.csv')
data.head(10)

Output:

(Table showing the first 10 rows of the marks data)
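The marks.csv file is not bundled with the article. If you want to run the code yourself, a hypothetical stand-in can be generated as below; the values are illustrative assumptions, with eight typical scores and two clear outliers so that contamination=0.2 on 10 rows matches them exactly:

import pandas as pd

# Hypothetical stand-in for marks.csv: eight typical scores, two outliers.
marks = [62, 65, 68, 70, 71, 73, 75, 77, 95, 98]
pd.DataFrame({'marks': marks}).to_csv('marks.csv', index=False)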

Step 2: Visualize the Data

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')

# Draw a box plot of the marks column to spot outliers visually
sns.boxplot(x=data['marks'])
plt.show()
(Box plot of the marks data)

From the box plot, we can infer that there are anomalies on the right.

Step 3: Define and Fit the Model

random_state = np.random.RandomState(42)

model = IsolationForest(
    n_estimators=100,      # number of iTrees in the ensemble
    max_samples='auto',    # sub-sample size used to build each tree
    contamination=0.2,     # expected proportion of anomalies
    random_state=random_state,
)

# Fit on the single feature we want to screen for outliers
model.fit(data[['marks']])

print(model.get_params())

Output:

{'bootstrap': False, 'contamination': 0.2, 'max_features': 1.0, 'max_samples': 'auto', 'n_estimators': 100, 'n_jobs': None, 'random_state': RandomState(MT19937) at 0x7F08CEA68940, 'verbose': 0, 'warm_start': False}

You can take a look at Isolation Forest documentation in sklearn to understand the model parameters.

Score the data to obtain anomaly scores:

# Lower decision_function scores mean more anomalous points
data['scores'] = model.decision_function(data[['marks']])

# predict() returns -1 for anomalies and 1 for normal points
data['anomaly_score'] = model.predict(data[['marks']])

data[data['anomaly_score'] == -1].head()

Output:

(Table of the rows flagged as anomalies, each with anomaly_score = -1)

Here, we can observe that both anomalies are assigned an anomaly score of -1.
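The two columns agree by construction: sklearn's predict() labels a point -1 exactly when decision_function() is negative, with the internal offset derived from the contamination parameter. A quick sanity check:

# Every flagged row has a negative decision_function score, and vice versa.
assert ((data['anomaly_score'] == -1) == (data['scores'] < 0)).all()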

Step 4: Model Evaluation

# What fraction of the known anomalies did the model flag?
# The marks data contains 2 true anomalies (both visible in the box plot).
anomaly_count = 2
accuracy = 100 * list(data['anomaly_score']).count(-1) / anomaly_count
print("Accuracy of the model:", accuracy)

Output:

Accuracy of the model: 100.0

What happens if we change the contamination parameter? Give it a try!!
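As a starting point, here is a small sketch that refits the model at a few contamination values (arbitrary choices for illustration) and counts how many points get flagged:

# Refit at several contamination levels and count the flagged points.
for cont in [0.05, 0.1, 0.2, 0.3]:
    m = IsolationForest(n_estimators=100, contamination=cont, random_state=42)
    labels = m.fit_predict(data[['marks']])
    print(f"contamination={cont}: {(labels == -1).sum()} points flagged")

You should see the number of flagged points track the contamination value, which is exactly the dependency discussed in the limitations below.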


Limitations of Isolation Forest

Isolation Forests are computationally efficient and have proven to be very effective in anomaly detection. Despite these advantages, they have a few limitations, as mentioned below.

  • The final anomaly score depends on the contamination parameter, provided while training the model. This implies that we should have an idea of what percentage of the data is anomalous beforehand to get a better prediction.
  • Also, the model suffers from a bias due to the way the branching takes place.

Anomaly Score Map

To understand the second point, take a look at the anomaly score map below.

Figure: normally distributed data (left) and its anomaly score map (right). Source: IEEE.

Here, in the score map on the right, we can observe that the points in the center obtained the lowest anomaly score, as expected. However, we can see four rectangular regions around the circle with lower anomaly scores as well. So, when scoring a new data point in any of these rectangular regions, it might not detect it as an anomaly.

Figure: two normally distributed clusters and the resulting anomaly score map.

Similarly, in the above figure, we can see that the model produced two additional low-score blobs (on the top right and bottom left) that never existed in the data.

Whenever a node splits in an iTree based on a threshold value, it divides the data into left and right branches, resulting in horizontal and vertical branch cuts. And these branch cuts result in this model bias.

Result

Figure: combined branch cuts for a single blob (left) and for two clusters (right).

The above figure shows the branch cuts after combining the outputs of all the trees of an Isolation Forest. Here, we can observe how the rectangular regions with lower anomaly scores formed in the left figure, while the right figure shows how the two additional blobs arise from the accumulated branch cuts.
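You can reproduce a score map like these yourself. A minimal sketch using sklearn on a single synthetic Gaussian blob (the data here is generated for illustration, not taken from the paper):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 2))
model = IsolationForest(n_estimators=100, random_state=0).fit(X)

# Score a dense grid of points to visualize the decision surface.
xx, yy = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
scores = model.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, scores, levels=20)
plt.scatter(X[:, 0], X[:, 1], s=4, c='k')
plt.colorbar(label='decision_function (higher = more normal)')
plt.show()

The axis-aligned bands radiating from the blob should be visible in the resulting contour plot.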

To overcome this limitation, Sahand Hariri and collaborators introduced an extension to Isolation Forests called 'Extended Isolation Forest' (EIF). In EIF, horizontal and vertical cuts are replaced with cuts at random slopes.
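For intuition, here is a minimal sketch of what a single EIF-style cut might look like, assuming the slope is drawn as a random normal vector and the intercept as a random point within the data's range, following the EIF paper's description (the helper name is hypothetical):

import numpy as np

def random_slope_split(X, rng):
    # Draw a random direction and a random intercept point, then split
    # samples by which side of the resulting hyperplane they fall on.
    normal = rng.normal(size=X.shape[1])
    intercept = rng.uniform(X.min(axis=0), X.max(axis=0))
    left = (X - intercept) @ normal <= 0
    return X[left], X[~left]

In two dimensions this produces cuts with arbitrary slopes instead of only horizontal and vertical lines, which removes the rectangular artifacts shown above.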


Despite introducing EIF (Extended Isolation Forest), the use of Isolation Forests for anomaly detection remains widespread across various fields.

Conclusion

The Isolation Forest algorithm is a powerful tool for detecting anomalies in data by measuring how easily points can be isolated through random partitioning. Unlike many traditional anomaly detection methods, it does not require assuming a specific data distribution, making it suitable for a wide range of applications. When implementing it, setting the n_estimators parameter appropriately ensures robust performance, balancing computational efficiency with detection accuracy. It identifies anomalies as the points that can be isolated with noticeably fewer splits than the rest of the data.

We hope this isolation forest example, which demonstrated the anomaly detection algorithm in Python, was useful. By applying Isolation Forest, you can isolate outliers and gain valuable insights from complex datasets, making it a highly effective solution for applications that require anomaly detection.

By leveraging random partitioning, Isolation Forest achieves scalable and efficient anomaly detection, even with large datasets. Adjusting the sub-sampling size (the max_samples parameter) enables fine-tuning of the algorithm to suit specific data characteristics, ensuring optimal results in real-world applications.

Key Takeaways

  • Isolation Forest is an unsupervised, tree-based anomaly detection method: no pre-defined labels are required.
  • Anomalies are "few and different", so they are isolated with fewer random splits and end up with shorter path lengths than normal points.
  • The contamination parameter encodes the expected proportion of anomalies and determines which points receive the -1 (anomaly) label, so a rough prior estimate is needed.
  • Axis-parallel branch cuts introduce a model bias, visible as artifact regions in anomaly score maps; Extended Isolation Forest (EIF) mitigates this with random-slope cuts.

Frequently Asked Questions

Q1. What is the difference between random forest and Isolation Forest?

A. Random Forest is a supervised learning algorithm for classification and regression tasks using decision trees. Isolation Forest is an unsupervised algorithm for anomaly detection, isolating anomalies based on unique properties.

Q2. What is the working of Isolation Forest?

A. Isolation Forest works by randomly selecting a feature from the dataset and a split value to create partitions of the data. The process repeats recursively until it isolates anomalies in their own partitions.

Q3. How good is an Isolation Forest?

A. Isolation Forest finds outliers quickly by randomly splitting the data. It works well on large datasets, but it needs a reasonable estimate of the expected proportion of outliers (the contamination parameter).

Q4. Is Isolation Forest distance based?

A. No. Isolation Forest doesn't measure distance; it finds outliers by seeing how quickly it can isolate data points. Points that are easy to isolate are likely outliers.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Responses From Readers


Yes Or No Spinner

This is a great blog post! I'm a big fan of using Isolation Forest for anomaly detection.

Akshara

Thanks for the feedback!! :)


Flash Card

What is the Isolation Forest algorithm and how does it detect anomalies?

Alright, so imagine you’re trying to find the odd one out in a massive crowd. The Isolation Forest algorithm is like your trusty tool for this job! It's an unsupervised anomaly detection algorithm that’s super good at spotting those outliers in big data sets. How does it do this? Well, it builds a bunch of decision trees, which we call Isolation Trees (iTrees), to single out the weird ones. The cool part is that anomalies are easier to isolate because they’re unique and stand out more. Here’s the game plan:

  • It randomly picks features and threshold values to split the data.
  • Then, it builds a whole ensemble of iTrees, with each tree using just a subset of the data.
  • Finally, it gives out anomaly scores based on how deep it needs to go to isolate data points; anomalies get isolated with fewer splits, so they have shorter path lengths.
This method shines when anomalies are rare and distinct, making it a favorite in fields like cybersecurity, finance, and medical research.


Flash Card

Why has the Isolation Forest algorithm gained popularity since its introduction?

Since Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou brought it to the scene in 2008, the Isolation Forest algorithm has been stealing the spotlight for its speed and reliability in sniffing out anomalies across different fields. Here’s why folks love it:

  • Efficiency: It’s built to handle big data sets fast, which is perfect for real-time stuff.
  • Effectiveness: It nails down anomalies with fewer splits, making it easy to tell the normal from the weird stuff.
  • Versatility: You can use it in all sorts of areas like cybersecurity for spotting odd patterns, finance for catching fraud, and medical research for finding rare diseases.
These perks make Isolation Forest the go-to choice for anomaly detection across various industries.


Flash Card

How does the random selection of features and thresholds benefit the Isolation Forest algorithm?

The randomness in picking features and thresholds is like the secret sauce that makes the Isolation Forest algorithm so good at its job. Here’s why this randomness rocks:

  • Robustness: By mixing up the features, it avoids getting stuck on specific patterns, making it tougher against different types of anomalies.
  • Efficiency: Random thresholds help it quickly isolate anomalies since they need fewer splits compared to the usual data points.
  • Scalability: This random approach lets it handle large data sets like a pro, cutting down the complexity in feature selection.
All in all, this randomness ensures that the Isolation Forest can effectively spot anomalies in all sorts of data sets with different quirks.


Flash Card

Compare the Isolation Forest algorithm with traditional supervised anomaly detection methods.

So, let’s talk about how the Isolation Forest stacks up against the traditional supervised anomaly detection methods. Unlike those that need labeled data to learn, the Isolation Forest is an unsupervised algorithm. Here’s the lowdown:

  • Data Requirement: Supervised methods need labeled data, which can be a pain to get, while Isolation Forest works with unlabeled data.
  • Flexibility: You can use Isolation Forest in different fields without any special training, whereas supervised methods need domain-specific labeled data.
  • Scalability: Isolation Forest is more scalable for big data sets thanks to its efficient tree-based setup, while supervised methods might struggle with high-dimensional data.
The unsupervised nature and scalability of Isolation Forest make it a versatile tool for anomaly detection in all sorts of applications.


Flash Card

How can one implement the Isolation Forest algorithm in Python using sklearn?

Getting the Isolation Forest algorithm up and running in Python is a breeze with the sklearn library. Here’s your step-by-step guide:

  1. First, import the necessary libraries:
    from sklearn.ensemble import IsolationForest
  2. Next, define the model with the parameters you want:
    model = IsolationForest(n_estimators=100, max_samples='auto', contamination=0.2, random_state=42)
  3. Then, fit the model to your dataset:
    model.fit(data[['feature']])
  4. Finally, score the data to spot anomalies:
    scores = model.decision_function(data[['feature']])
The contamination parameter is key here, as it guesses how many anomalies are in the dataset, affecting how sensitive the model is to outliers.


Flash Card

What are the limitations of the Isolation Forest algorithm?

Even though it’s got a lot going for it, the Isolation Forest algorithm isn’t perfect. Here are a few things to watch out for:

  • Dependency on Contamination Parameter: It needs a good guess of the anomaly percentage in the dataset, which isn’t always easy to nail down, affecting how well it works.
  • Model Bias: The random branching might introduce bias, especially if the dataset has complex structures that don’t fit well with random splits.
  • Assumption of Rare Anomalies: It assumes anomalies are rare and distinct, which might not always be true, leading to possible misclassification.
Knowing these limitations is crucial for applying the Isolation Forest algorithm effectively and making sense of its results in real-world scenarios.

