In my previous article, I discussed the theoretical concepts of outliers and tried to answer the question: "When should we drop outliers, and when should we keep them?". In this article, I will focus on outlier detection and the different ways of treating outliers. Finding and removing outliers from a dataset is an important part of feature engineering before training machine learning algorithms for predictive modeling, because outliers in a classification or regression dataset can lead to lower predictive performance.
I recommend reading that article before proceeding so that you have a clear idea of outlier analysis in data science projects. In this article, we cover outlier detection in Python, outlier detection in machine learning, and how to remove outliers in Python, so that you get a complete picture of the topic.
Identifying and removing outliers is essential during data preprocessing in machine learning to prevent skewed results. In Python, libraries like Pandas and Scikit-learn provide strong techniques for identifying and eliminating outliers, and methods such as the Z-score, the IQR, and clustering techniques can successfully detect them. By fixing these anomalies, data scientists can improve model accuracy and reliability, resulting in more insightful analyses and predictions across various fields, such as finance and healthcare.
In this article, you will learn how to remove outliers in Python using various techniques. We will cover the Z-score method, IQR method, and other outlier removal techniques to help you detect and remove outliers from your datasets. By the end of this article, you will have a solid understanding of how to effectively remove outliers in Python for more accurate data analysis.
An outlier is a data point that stands out significantly from the rest of the data. It can be an extremely high or low value compared to the other observations in a dataset. Outliers can be caused by measurement errors, natural variation in the data, or even unexpected discoveries.
There are three main types of outliers:
Global outliers: Stand out from the entire dataset, like a lone wolf.
Contextual outliers: Depend on their surroundings, like a high sale at a clothing store.
Collective outliers: Groups that deviate together, like a cluster of oddly high values.
Outlier detection is a method used to find unusual or abnormal data points in a set of information. Imagine you have a group of friends, and you’re all about the same age, but one person is much older or younger than the rest. That person would be considered an outlier because they stand out from the usual pattern. In data, outliers are points that deviate significantly from the majority, and detecting them helps identify unusual patterns or errors in the information. This method is like finding the odd one out in a group, helping us spot data points that might need special attention or investigation.
There are several ways to treat outliers in a dataset, depending on the nature of the outliers and the problem being solved. Here are some of the most common ways of treating outlier values.
Trimming: this approach excludes the outlier values from our analysis. Its main advantage is that it is the fastest option; its drawback is that the data becomes thin when many outliers are present in the dataset.
Capping: in this technique, we cap our data at set limits. Any data point above or below a chosen value is considered an outlier and is replaced with that value, and the number of points affected tells us how much of the data the cap touches. It's like setting a boundary and saying, "Anything beyond this point is unusual," and by doing so, we identify and treat the outliers in our data.
For example, if you're working on the income feature, you might find that people above a certain income level behave similarly to those with a lower income. In this case, you can cap the income value at a level that keeps that behavior intact and treat the outliers accordingly, as sketched below.
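Here is a minimal sketch of capping with NumPy, using a hypothetical income column (the ceiling of 200,000 is an arbitrary choice for illustration):
import numpy as np
import pandas as pd
# Hypothetical data: cap any income above the chosen ceiling of 200,000
incomes = pd.DataFrame({'income': [25000, 48000, 61000, 350000, 1200000]})
incomes['income'] = np.clip(incomes['income'], a_min=None, a_max=200000)
print(incomes)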
Treating outliers as missing values: by assuming outliers are missing observations, we treat them accordingly, i.e., the same as missing-value imputation.
You can refer to the missing value article here.
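As a minimal sketch of this idea, assuming the placement dataset (placement.csv) used later in this article has already been loaded into df, we can mark 3-sigma outliers in the cgpa column as missing and then impute them with the median:
import numpy as np
# Treat 3-sigma outliers in 'cgpa' as missing, then impute with the median
upper = df['cgpa'].mean() + 3 * df['cgpa'].std()
lower = df['cgpa'].mean() - 3 * df['cgpa'].std()
df.loc[(df['cgpa'] > upper) | (df['cgpa'] < lower), 'cgpa'] = np.nan
df['cgpa'] = df['cgpa'].fillna(df['cgpa'].median())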
Discretization: in this method, we create groups and assign the outliers to a specific group, so that they follow the same behavior as the other points in that group. This approach is often referred to as binning. Binning is a way of organizing data where we group similar items together, helping us identify and understand patterns more effectively.
You can learn more about discretization here.
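A minimal sketch of binning with pandas, using a small made-up Series: pd.cut groups the values into equal-width bins, so the extreme value simply lands in the outermost bin instead of dominating the scale.
import pandas as pd
# Group a numeric column into 5 equal-width bins; the extreme value 50
# lands in the outermost bin rather than standing alone
values = pd.Series([2, 3, 4, 5, 6, 7, 50])
print(pd.cut(values, bins=5, labels=False))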
Z-Score Method
Assumption: the features are normally or approximately normally distributed.
Step 1: Importing necessary dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
Step 2: Read and load the dataset
df = pd.read_csv('placement.csv')
df.sample(5)
Step 3: Plot the distribution plots for the features
import warnings
warnings.filterwarnings('ignore')
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
sns.histplot(df['cgpa'], kde=True)  # distplot is deprecated in recent seaborn releases
plt.subplot(1,2,2)
sns.histplot(df['placement_exam_marks'], kde=True)
plt.show()
Step 4: Finding the boundary values
print("Highest allowed",df['cgpa'].mean() + 3*df['cgpa'].std())
print("Lowest allowed",df['cgpa'].mean() - 3*df['cgpa'].std())
Output:
Highest allowed 8.808933625397177
Lowest allowed 5.113546374602842
Step 5: Finding the outliers
df[(df['cgpa'] > 8.80) | (df['cgpa'] < 5.11)]
Step 6: Trimming of outliers
new_df = df[(df['cgpa'] < 8.80) & (df['cgpa'] > 5.11)]
new_df
Step 7: Find the capping limits
upper_limit = df['cgpa'].mean() + 3*df['cgpa'].std()
lower_limit = df['cgpa'].mean() - 3*df['cgpa'].std()
Step 8: Now, apply the capping
df['cgpa'] = np.where(
    df['cgpa'] > upper_limit,
    upper_limit,
    np.where(
        df['cgpa'] < lower_limit,
        lower_limit,
        df['cgpa']
    )
)
Step 9: Now, see the statistics using the describe() function
df['cgpa'].describe()
Output:
count    1000.000000
mean        6.961499
std         0.612688
min         5.113546
25%         6.550000
50%         6.960000
75%         7.370000
max         8.808934
Name: cgpa, dtype: float64
This completes our Z-score-based technique!
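As a side note, the same 3-sigma check can be written in one line with SciPy's zscore; a minimal sketch, assuming it is run on the original df before capping (scipy.stats.zscore uses the population standard deviation by default, so the cut-offs can differ very slightly from the pandas mean/std version above):
from scipy import stats
# Flag rows whose 'cgpa' z-score exceeds 3 in absolute value
z_scores = np.abs(stats.zscore(df['cgpa']))
print(df[z_scores > 3])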
IQR-Based Method
This method is used when our data distribution is skewed.
Step 1: Import necessary dependencies and load the dataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('placement.csv')
df.head()
Step 2: Plot the distribution plots for the features
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
sns.histplot(df['cgpa'], kde=True)
plt.subplot(1,2,2)
sns.histplot(df['placement_exam_marks'], kde=True)
plt.show()
Step 3: Plot the box plot for the skewed feature
sns.boxplot(df['placement_exam_marks'])
Step 4: Find the IQR and the boundary values
percentile25 = df['placement_exam_marks'].quantile(0.25)
percentile75 = df['placement_exam_marks'].quantile(0.75)
iqr = percentile75 - percentile25
upper_limit = percentile75 + 1.5 * iqr
lower_limit = percentile25 - 1.5 * iqr
Step 5: Finding the outliers
df[df['placement_exam_marks'] > upper_limit]
df[df['placement_exam_marks'] < lower_limit]
Step 6: Trimming of outliers
new_df = df[df['placement_exam_marks'] < upper_limit]
new_df.shape
Step 7: Compare the plots after trimming
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.histplot(df['placement_exam_marks'], kde=True)
plt.subplot(2,2,2)
sns.boxplot(df['placement_exam_marks'])
plt.subplot(2,2,3)
sns.histplot(new_df['placement_exam_marks'], kde=True)
plt.subplot(2,2,4)
sns.boxplot(new_df['placement_exam_marks'])
plt.show()
Step 8: Capping the outliers
new_df_cap = df.copy()
new_df_cap['placement_exam_marks'] = np.where(
    new_df_cap['placement_exam_marks'] > upper_limit,
    upper_limit,
    np.where(
        new_df_cap['placement_exam_marks'] < lower_limit,
        lower_limit,
        new_df_cap['placement_exam_marks']
    )
)
Step 9: Compare the plots after capping
plt.figure(figsize=(16,8))
plt.subplot(2,2,1)
sns.histplot(df['placement_exam_marks'], kde=True)
plt.subplot(2,2,2)
sns.boxplot(df['placement_exam_marks'])
plt.subplot(2,2,3)
sns.histplot(new_df_cap['placement_exam_marks'], kde=True)
plt.subplot(2,2,4)
sns.boxplot(new_df_cap['placement_exam_marks'])
plt.show()
This completes our IQR-based technique!
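To avoid repeating this boilerplate, the IQR logic can be wrapped in a small helper; a minimal sketch (the function name iqr_bounds is our own, not a library function):
def iqr_bounds(series, k=1.5):
    # Return IQR-based lower and upper limits for a numeric Series
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

low, high = iqr_bounds(df['placement_exam_marks'])
trimmed = df[df['placement_exam_marks'].between(low, high)]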
Steps to follow for the percentile method:
Step 1: Import necessary dependencies and load the dataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('weight-height.csv')
df.sample(5)
Step 2: Plot the distribution and box plots for the feature
sns.histplot(df['Height'], kde=True)
sns.boxplot(df['Height'])
Step 3: Find the boundary values at the 1st and 99th percentiles
upper_limit = df['Height'].quantile(0.99)
lower_limit = df['Height'].quantile(0.01)
Step 4: Trimming of outliers
new_df = df[(df['Height'] <= upper_limit) & (df['Height'] >= lower_limit)]
Step 5: Compare the plots after trimming
sns.histplot(new_df['Height'], kde=True)
sns.boxplot(new_df['Height'])
Winsorization
df['Height'] = np.where(df['Height'] >= upper_limit,
                        upper_limit,
                        np.where(df['Height'] <= lower_limit,
                                 lower_limit,
                                 df['Height']))
sns.histplot(df['Height'], kde=True)
sns.boxplot(df['Height'])
This completes our percentile-based technique!
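As an alternative to the manual np.where logic above, SciPy ships a ready-made winsorization routine; a minimal sketch with the same 1% limits on both tails:
from scipy.stats.mstats import winsorize
# Clamp the bottom 1% and top 1% of 'Height' in a single call
df['Height'] = winsorize(df['Height'], limits=[0.01, 0.01])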
Outlier detection and removal is a crucial data analysis step for a machine learning model, as outliers can significantly impact the accuracy of a model if they are not handled properly. The techniques discussed in this article, such as Z-score and Interquartile Range (IQR), are some of the most popular methods used in outlier detection. The technique to be used depends on the specific characteristics of the data, such as the distribution and number of variables, as well as the required outcome.
We hope you liked the article! Removing outliers in Python is crucial for accurate data analysis, and techniques such as the Z-score and IQR methods make outlier removal straightforward, leaving you with cleaner datasets.
Q1. What are the most popular outlier detection methods?
A. The most popular outlier detection methods are the Z-score, IQR (interquartile range), Mahalanobis distance, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), Local Outlier Factor (LOF), and One-Class SVM (Support Vector Machine).
Q2. How do you identify outliers in Python?
A. Libraries like SciPy and NumPy can be used to identify outliers. Plots such as box plots, scatter plots, and histograms are also useful for visualizing the data and its distribution, so you can identify outliers as the values that fall outside the normal range.
Q3. What is the benefit of removing outliers?
A. Removing outliers enhances the accuracy and stability of statistical models and ML algorithms by reducing their impact on results. Outliers are extreme values that differ from the rest of the data, so they can distort statistical analyses and skew results. Removing them makes the results more robust and accurate by eliminating their influence, and it reduces overfitting in ML algorithms by avoiding fits to extreme values instead of the underlying data pattern.
Q4. How do you detect an outlier?
A. To detect an outlier, identify the data points that are significantly different from the rest, using:
Statistical methods: Z-score, IQR, box plots
Visual methods: scatter plots, histograms
Other methods: domain knowledge, machine learning (Isolation Forest, Local Outlier Factor; see the sketch below)
Also consider how you define an outlier, what its impact is, and how to handle it (remove, cap, or transform).
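As a rough illustration of the machine-learning route, here is a minimal sketch using scikit-learn's Isolation Forest on the cgpa column of the placement data; the contamination value of 0.01 is an assumption you would tune for your own data:
from sklearn.ensemble import IsolationForest
# Fit an Isolation Forest; fit_predict returns -1 for outliers and 1 for inliers
iso = IsolationForest(contamination=0.01, random_state=42)
labels = iso.fit_predict(df[['cgpa']])
print(df[labels == -1])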