As a data scientist, how would you identify anomalies or outliers in practice, whether for fraud detection on bank transactions or for smart-meter anomaly detection? Outlier detection means finding data points that lie significantly far from the average and, depending on the goal, removing or resolving them so they do not skew the analysis. Have you ever wondered how the bank where you make your transactions helps protect you by identifying fraud? The moment someone logs into your account from an unusual place, the bank notices the suspicious activity and immediately sends you a message to confirm whether it is you or someone else.
Overview:
We often see anomalies or outliers in datasets: values that differ markedly from the rest of the observations. Suppose we own a company that supplies the same service as before, yet sales are declining, and we notice odd values in the front-end data. Such values are termed anomalies or outliers.
Let’s take an example that will further clarify what anomaly means.
In this example, the bird is the outlier, or noise.
import numpy as np
import pandas as pd
import seaborn as sns   # numpy and seaborn are used later for the outlier checks and box plots
titanic = pd.read_csv('titanic.csv')
print(titanic.head())
We can see many null values in the output. We will fill the null values with the mode of each column.
titanic['age'].fillna(titanic['age'].mode()[0], inplace=True)
titanic['cabin'].fillna(titanic['cabin'].mode()[0], inplace=True)
titanic['boat'].fillna(titanic['boat'].mode()[0], inplace=True)
titanic['body'].fillna(titanic['body'].mode()[0], inplace=True)
titanic['sex'].fillna(titanic['sex'].mode()[0], inplace=True)
titanic['survived'].fillna(titanic['survived'].mode()[0], inplace=True)
titanic['home.dest'].fillna(titanic['home.dest'].mode()[0], inplace=True)
titanic['age'].plot.hist(
bins = 50,
title = "Histogram of the age"
)
This distribution is a Gaussian distribution, often called a normal distribution. Its two parameters are the mean and the standard deviation: changing the mean shifts the curve to the left or right. A standard normal distribution has mean μ = 0 and standard deviation σ = 1, and a Z-table gives the probability associated with any Z-score.
We can calculate Z-scores using the formula z = (x - μ) / σ, where x is a random variable, μ is the mean, and σ is the standard deviation.
Why do we need to calculate Z-scores? They tell us where a single value lies within the entire distribution.
For example, suppose the mean of the maths scores is 82 and the standard deviation σ is 4, and we have a score of x = 75. The Z-score is (75 - 82) / 4 = -1.75, which tells us that 75 lies 1.75 standard deviations below the mean. Z-scores help determine whether a value is higher than, lower than, or equal to the mean, and how far away it is.
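The same arithmetic can be checked in a couple of lines of Python; the numbers below are just the illustrative values from the example, and norm.cdf plays the role of the Z-table.
from scipy.stats import norm

mean, std, x = 82, 4, 75    # illustrative values from the example above
z = (x - mean) / std        # (75 - 82) / 4 = -1.75
print(z)                    # -1.75: 1.75 standard deviations below the mean
print(norm.cdf(z))          # about 0.04: the probability a Z-table would give for z = -1.75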
Now we will calculate Z-scores for the Titanic data in Python and look for outliers. We import zscore from SciPy, compute the Z-score of the age column, and then flag rows with a lambda. The flagged outliers are passengers with ages roughly between 66 and 80.
from scipy.stats import zscore
titanic["age_zscore"] = zscore(titanic["age"])
titanic["outlier"] = titanic["age_zscore"].apply(
lambda x: x = 2.8
)
titanic[titanic["outlier"]]
We will now examine a clustering-based method, Density-Based Spatial Clustering of Applications with Noise (DBSCAN). As the name indicates, outlier detection here relies on clustering: we compute the distances between points, group nearby points into dense clusters, and treat points that do not belong to any cluster as noise.
Let's continue with our Titanic data and plot fare against age. In the scatter plot of the age and fare variables, three dots lie far away from the others.
Before we proceed further, we will normalize our data variables. There are many ways to normalize data; for example, we can import StandardScaler or MinMaxScaler from sklearn.
titanic['fare'].fillna(titanic['fare'].mean(), inplace=True)
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
fage = titanic[["age", "fare"]]   # the two columns we want to cluster
fage = scale.fit_transform(fage)
fage = pd.DataFrame(fage, columns = ["age", "fare"])
fage.plot.scatter(x = "age", y = "fare")
We used StandardScaler to standardize the data and plotted a scatter plot.
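For comparison, the MinMaxScaler mentioned earlier would be used the same way; this is a small sketch, assuming we again scale the age and fare columns, and it rescales each column to the [0, 1] range instead of to zero mean and unit variance.
from sklearn.preprocessing import MinMaxScaler

minmax = MinMaxScaler()
fage_minmax = pd.DataFrame(
    minmax.fit_transform(titanic[["age", "fare"]]),
    columns=["age", "fare"]
)
fage_minmax.describe()   # every value now lies between 0 and 1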
Now we will import DBSCAN and assign points to clusters. Points that cannot be assigned to any cluster are labelled -1.
from sklearn.cluster import DBSCAN
outlier = DBSCAN(
eps = 0.5,
metric="euclidean",
min_samples = 3,
n_jobs = -1)
clusters = outlier.fit_predict(fage)
clusters
array([0, 1, 1, ..., 1, 1, 1])
Now we have the results, but how do we check which points were labelled -1? We will use argmin to find the index of the smallest value in the clusters array.
index = clusters.argmin()
print("The element is at:", index)
small_num = np.min(clusters)
print("The small number is : ", small_num)
print(np.where(clusters == small_num))
The element is at: 14
The small number is : -1
(array([ 14, 50, 66, 94, 285, 286], dtype=int64),)
We can see from the result that six points were labelled -1, i.e., marked as noise. Let's now plot the scatter plot again, coloured by cluster.
from matplotlib import cm
c = cm.get_cmap('magma_r')
fage.plot.scatter(
x = "age",
y = "fare",
c = clusters,
cmap = c,
colorbar = True
)
The methods we applied above target univariate outliers, looking at one variable at a time. To detect multivariate outliers, we first need to understand what they are.
For example, take the readings of a car. It has two meters: a speedometer, which measures the speed at which the vehicle is moving, and an rpm gauge, which records the number of rotations per minute made by the engine. Suppose the speedometer ranges from 0-60 mph and the rpm from 0-750. We expect the two readings to be correlated: if the speedometer shows 50 mph while the rpm shows 0, the readings are inconsistent, because a speed above zero means the car is moving and the rpm should therefore also be high. Each value may look normal on its own, but the combination does not, and that is what we mean by a multivariate outlier.
In DBSCAN we used the Euclidean distance metric, but here we turn to the Mahalanobis distance. The Mahalanobis distance can also be used directly with DBSCAN:
DBSCAN(eps=0.5, min_samples=3, metric='mahalanobis', metric_params={'V': np.cov(X.T)}, algorithm='brute', leaf_size=30, n_jobs=-1)   # V: covariance of the features (X has one sample per row)
Why is Euclidean distance unsuitable when the variables are correlated with each other? Because it treats every direction equally, it can give a misleading picture of how close two points really are. The Mahalanobis method instead measures the distance between a point and a distribution, i.e., the clean data. A univariate Z-score is x minus the mean divided by the standard deviation; the Mahalanobis distance generalizes this to several dimensions by replacing the standard deviation with the covariance matrix: D²(x) = (x - μ)ᵀ Σ⁻¹ (x - μ).
So what effect does dividing by the covariance have? If the variables are highly correlated, the covariance values are high and distances along the direction of that correlation are scaled down accordingly; if the variables are uncorrelated, the covariance values are low and the distance is barely reduced. In this way, the Mahalanobis distance accounts for both the scale and the correlation of the variables.
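A minimal sketch of this difference, using made-up speed and rpm readings rather than any of the article's datasets: two candidate points sit at the same Euclidean distance from the centre, but only the one that breaks the speed-rpm correlation gets a large Mahalanobis distance.
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

# made-up, strongly correlated speed (mph) and rpm readings
speed = np.array([10, 20, 30, 40, 50, 60])
rpm = np.array([150, 280, 400, 520, 640, 750])
data = np.column_stack([speed, rpm])

centre = data.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(data.T))    # inverse covariance of the clean readings

# two candidate readings at the same Euclidean distance from the centre
on_trend = centre + np.array([10, 120])    # follows the speed-rpm trend
off_trend = centre + np.array([10, -120])  # breaks the trend: more speed, less rpm

print(euclidean(on_trend, centre), euclidean(off_trend, centre))   # identical
print(mahalanobis(on_trend, centre, inv_cov))                      # small: consistent with the data
print(mahalanobis(off_trend, centre, inv_cov))                     # large: a multivariate outlier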
The dataset can be taken from Anomaly-/caret.csv at main · aster28/Anomaly- (github.com).
df = pd.read_csv('caret.csv').iloc[:, [0,4,6]]
df.head()
We define a function distance with the parameters x=None, data=None, and cov=None. Inside the function, we subtract the mean of the data from x; if a covariance matrix is supplied we use it, otherwise we calculate the covariance matrix from the data. T stands for transpose. For example, if an array holds five samples of two variables, np.cov expects the variables in rows, so we transpose the matrix before computing the covariance.
mean = [0, 1]                 # illustrative mean vector
cov = [[1, 0.5], [0.5, 1]]    # illustrative covariance matrix
np.random.multivariate_normal(mean, cov, size = 5)
array([[ 0.0509196, 0.536808 ],
[ 0.1081547, 0.9308906],
[ 0.4545248, 1.4000731],
[ 0.9803848, 0.9660610],
[ 0.8079491 , 0.9687909]])
np.random.multivariate_normal(mean, cov, size = 5).T
array([[ 0.0586423, 0.8538419, 0.2910855, 5.3047358, 0.5449706],
[ 0.6819089, 0.8020285, 0.7109037, 0.9969768, -0.7155739]])
We use scipy.linalg, SciPy's linear-algebra module, which provides many linear-algebra routines; its inv function inverts a matrix. NumPy's dot performs matrix multiplication.
import numpy as np
from scipy import linalg

def distance(x=None, data=None, cov=None):
    x_m = x - np.mean(data)           # centre x on the column means of the clean data
    if cov is None:
        cov = np.cov(data.values.T)   # covariance matrix of the features
    inv_cov = linalg.inv(cov)         # inverse covariance matrix
    left = np.dot(x_m, inv_cov)
    m_distance = np.dot(left, x_m.T)
    return m_distance.diagonal()      # squared Mahalanobis distance of each row
df_g= df[['carat', 'depth', 'price']].head(50)
df_g['m_distance'] = distance(x=df_g, data=df[['carat', 'depth', 'price']])
df_g.head()
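One common rule of thumb, not shown in the original code, is to compare these squared Mahalanobis distances against a chi-square critical value with degrees of freedom equal to the number of variables (three here); a short sketch using the df_g frame built above:
from scipy.stats import chi2

cutoff = chi2.ppf(0.99, df=3)                   # 99% critical value for 3 variables
df_g['outlier'] = df_g['m_distance'] > cutoff   # flag points beyond the cutoff
print(df_g[df_g['outlier']])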
The Tukey method is also often called the box-and-whisker or box plot method. It uses the interquartile range (IQR) to define an upper and a lower bound:
Upper range = 75th percentile + k * IQR
Lower range = 25th percentile - k * IQR
Let us look at the age variable of our Titanic data using a box plot.
sns.boxplot(titanic['age'].values)
In the plot, the box plot created by Seaborn shows several dots between the ages of roughly 55 and 80 as outliers lying beyond the whiskers. We will compute the lower and upper range by writing a function outliers_detect.
def outliers_detect(x, k = 1.5):
    x = np.array(x).copy().astype(float)
    first = np.quantile(x, .25)
    third = np.quantile(x, .75)
    # IQR calculation
    iqr = third - first
    # Upper range and lower range
    lower = first - (k * iqr)
    upper = third + (k * iqr)
    return lower, upper
outliers_detect(titanic['age'], k = 1.5)
(2.5, 54.5)
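To actually pull out the flagged passengers, the returned bounds can be used as a filter; a small follow-up sketch:
lower, upper = outliers_detect(titanic['age'], k = 1.5)
age_outliers = titanic[(titanic['age'] < lower) | (titanic['age'] > upper)]
print(age_outliers[['age']].head())
print(age_outliers.shape)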
We will now return to the caret dataset and run detection with PyCaret.
from pycaret.anomaly import *
setup_anomaly_data = setup(df)
PyCaret is an open-source machine learning library whose anomaly module uses unsupervised models to detect outliers. It has a get_data method for loading datasets bundled with PyCaret itself and a setup method for the preprocessing step before detection; setup usually takes a data frame but also offers many other options, such as ignore_features.
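For illustration, the two helpers mentioned above could be used roughly like this; this is a sketch only, where 'anomaly' is the demo dataset bundled with PyCaret and ignoring 'Col10' is a purely hypothetical choice.
from pycaret.datasets import get_data
from pycaret.anomaly import setup

sample = get_data('anomaly')                      # a demo dataset bundled with PyCaret
exp = setup(sample, ignore_features=['Col10'])    # hypothetical: drop one column from the analysis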
Another key method is create_model, which trains a chosen algorithm. We will first use Isolation Forest.
ifor = create_model("iforest")
plot_model(ifor)
ifor_predictions = predict_model(ifor, data = df)
print(ifor_predictions)
ifor_anomaly = ifor_predictions[ifor_predictions["Anomaly"] == 1]
print(ifor_anomaly.head())
print(ifor_anomaly.shape)
An Anomaly value of 1 indicates an outlier, and 0 indicates a normal point. The yellow colour in the plot marks the outliers.
Now let us try another algorithm, K-Nearest Neighbors (KNN).
knn = create_model("knn")
plot_model(knn)
knn_pred = predict_model(knn, data = df)
print(knn_pred)
knn_anomaly = knn_pred[knn_pred["Anomaly"] == 1]
knn_anomaly.head()
knn_anomaly.shape
Now we will use a clustering algorithm.
clus = create_model("cluster")
plot_model(clus)
clus_pred = predict_model(clus, data = df)
print(clus_pred)
clus_anomaly = clus_pred[clus_pred["Anomaly"] == 1]
print(clus_anomaly.head())
clus_anomaly.shape
PyOD is a Python library for detecting outliers in multivariate data. It is suitable for both supervised and unsupervised learning.
from pyod.models.iforest import IForest
from pyod.models.knn import KNN
We imported the library and the two algorithms. Next, we import a few PyOD utilities for generating data, evaluating results, and visualization.
from pyod.utils.data import generate_data
from pyod.utils.data import evaluate_print
from pyod.utils.example import visualize
train= 300
test=100
contaminate = 0.1
X_train, X_test, y_train, y_test = generate_data(n_train=train, n_test=test, n_features=2,contamination=contaminate,random_state=42)
cname_alg = 'KNN' # the name of the algorithm is K-Nearest Neighbors
c = KNN()
c.fit(X_train) # Fit the algorithm on the training data
y_train_pred = c.labels_                     # binary outlier labels on the training data
y_train_scores = c.decision_scores_          # raw outlier scores on the training data
y_test_pred = c.predict(X_test)              # binary outlier labels on the test data
y_test_scores = c.decision_function(X_test)  # raw outlier scores on the test data
print("Training Data:")
evaluate_print(cname_alg, y_train, y_train_scores)
print("Test Data:")
evaluate_print(cname_alg, y_test, y_test_scores)
visualize(cname_alg, X_train, y_train, X_test, y_test, y_train_pred, y_test_pred, show_figure=True, save_figure=True)
We will now use the IForest algorithm.
fname_alg = 'IForest' # the name of the algorithm is Isolation Forest
f = IForest()
f.fit(X_train) # Fit the algorithm on the training data
y_train_pred = f.labels_
y_train_scores = f.decision_scores_
y_test_pred = f.predict(X_test)
y_test_scores = f.decision_function(X_test)
print("Training Data:")
evaluate_print(fname_alg, y_train, y_train_scores)
print("Test Data:")
evaluate_print(fname_alg, y_test, y_test_scores)
visualize(fname_alg, X_train, y_train, X_test, y_test, y_train_pred, y_test_pred, show_figure=True, save_figure=True)
We will use the air passengers time-series dataset: prophet/example_air_passengers.csv at main · aster28/prophet (github.com).
from prophet import Prophet
m = Prophet()
data = pd.read_csv('air_pass.csv')
data.head()
data.columns = ['ds', 'y']
data['y'] = np.where(data['y'] != 0, np.log(data['y']), 0)
Taking the log of the y column stabilizes the variance of the series, and the np.where guard avoids taking the log of zero. We split the data into train and test sets and store the model's prediction in the variable forecast.
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, random_state=42)
m.fit(train[['ds','y']])
forecast = m.predict(test)
def detect(forecast):
    forecast = forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].copy()
    forecast['real'] = data['y']
    forecast['anomaly'] = 0
    forecast.loc[forecast['real'] > forecast['yhat_upper'], 'anomaly'] = 1
    forecast.loc[forecast['real'] < forecast['yhat_lower'], 'anomaly'] = -1
    # importance: how far outside the predicted interval the real value lies
    forecast['imp'] = 0
    in_range = forecast['yhat_upper'] - forecast['yhat_lower']
    forecast.loc[forecast['anomaly'] == 1, 'imp'] = (forecast['real'] - forecast['yhat_upper']) / in_range
    forecast.loc[forecast['anomaly'] == -1, 'imp'] = (forecast['yhat_lower'] - forecast['real']) / in_range
    return forecast
detect(forecast)
In the output of detect, the anomaly column is -1 for points that fall below the predicted lower bound and 1 for points that rise above the upper bound.
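To list only the flagged points, the output of detect can be filtered on the anomaly column; a short follow-up sketch:
result = detect(forecast)
print(result[result['anomaly'] == -1].head())   # points below the predicted lower bound
print(result[result['anomaly'] == 1].head())    # points above the predicted upper bound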
The process of finding outliers in a given dataset is called anomaly detection. Outliers are data objects that stand out from the rest of the object values in the dataset and don’t behave normally. Anomaly detection tasks can use distance-based and density-based clustering methods to identify outliers as a cluster. Here, we discuss anomaly detection’s various methods and explain them using the code on three datasets: Titanic, Air passengers, and Caret to understand uni-variate and multi-variate outliers.