Naive Bayes is a classification technique based on Bayes' theorem. It is a simple but powerful algorithm for predictive modeling in supervised learning. The idea behind Naive Bayes is easy to understand, and the algorithm stays fast and accurate even on large datasets.
There are three types of Naive Bayes models: Gaussian, Multinomial, and Bernoulli.
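Scikit-learn provides a separate estimator for each of these variants. Below is a minimal sketch; the tiny arrays are made-up toy data, used only to show the kind of features each variant expects:

# A minimal sketch of the three Naive Bayes variants in scikit-learn.
# The small arrays below are made-up toy data for illustration only.
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

y = np.array([0, 1, 0, 1])

# Gaussian NB: continuous features, assumed normally distributed within each class
X_cont = np.array([[1.2, 3.4], [2.1, 0.5], [1.0, 3.0], [2.5, 0.2]])
print(GaussianNB().fit(X_cont, y).predict(X_cont))

# Multinomial NB: count features (e.g. word counts in text classification)
X_counts = np.array([[3, 0, 1], [0, 2, 4], [2, 1, 0], [0, 3, 5]])
print(MultinomialNB().fit(X_counts, y).predict(X_counts))

# Bernoulli NB: binary features (e.g. word present / absent)
X_bin = np.array([[1, 0, 1], [0, 1, 1], [1, 0, 0], [0, 1, 0]])
print(BernoulliNB().fit(X_bin, y).predict(X_bin))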
Gaussian Naive Bayes is based on Bayes' theorem and makes the strong assumption that the predictors are independent of each other. For example, whether a loan applicant should be approved depends on the applicant's income, age, previous loans, location, and transaction history. In a real-life scenario it is very unlikely that these features do not interact with each other, yet surprisingly Gaussian Naive Bayes often performs well in such situations. This assumption is called class conditional independence.
Naive Bayes assumes that events are mutually independent. To understand what that means, let's start with some basic statistics and the example of rolling two dice: the outcome of the first die has no effect on the outcome of the second, so the two results are independent events.

Let A and B be any two events with probabilities P(A) and P(B). If the events are mutually independent, the probability of both occurring together is simply the product:

P(A and B) = P(A) * P(B)

For the two dice, the probability of rolling a six on both is (1/6) * (1/6) = 1/36.
If we are told that B has occurred, then the probability of A might change. The new probability of A is called the conditional probability of A given B.
Conditional Probability:

P(A | B) = P(A and B) / P(B)

We can say that:

P(B | A) = P(A and B) / P(A)

Rearranging either equation, we can also write:

P(A and B) = P(A | B) * P(B) = P(B | A) * P(A)

This gives us the Bayes theorem:

P(A | B) = P(B | A) * P(A) / P(B)
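As a quick numerical check (a minimal sketch, not part of the original worked example), Bayes' theorem can be verified on the two-dice setting by brute-force enumeration of all 36 outcomes:

# Verify Bayes' theorem by enumerating all 36 outcomes of two dice.
# Event A: the first die shows a 6.  Event B: the sum of the dice is at least 10.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))    # all 36 equally likely rolls
A = {o for o in outcomes if o[0] == 6}             # first die is 6
B = {o for o in outcomes if sum(o) >= 10}          # sum is 10, 11 or 12

p_A = len(A) / 36
p_B = len(B) / 36
p_A_and_B = len(A & B) / 36

p_A_given_B = p_A_and_B / p_B                      # direct definition
p_B_given_A = p_A_and_B / p_A
p_A_given_B_bayes = p_B_given_A * p_A / p_B        # P(B|A) * P(A) / P(B)

print(p_A_given_B, p_A_given_B_bayes)              # both print 0.5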
Let's understand the working of Naive Bayes with an example. Consider a use case where we want to predict whether a flight will land on time, given the weather conditions on that specific day, using the Naive Bayes algorithm. Below are the steps the algorithm follows:
Problem: Given the historical data, will the flight land on time if the weather is Dusty?
P(Yes | Dusty) = P( Dusty | Yes) * P(Yes) / P(Dusty)
1. Calculating the prior P(Yes) and the evidence P(Dusty)
P(Dusty) = 6/16 = 0.375
P(Yes)= 9/16 = 0.563
2. Calculating the likelihood P(Dusty | Yes)
P (Dusty | Yes) = 4/9 = 0.444
Putting these values into our equation:
P (Yes | Dusty) = 0.444 * 0.563 / 0.375 = 0.666
P(No | Dusty) = P( Dusty | No) * P(No) / P(Dusty)
1. Calculating the prior P(No) and the evidence P(Dusty)
P(Dusty) = 6/16 = 0.375
P(No) = 7/16 = 0.438
2. Calculating the likelihood P(Dusty | No)
P(Dusty | No) = 2/7 = 0.285
Putting these values into our equation:
P(No | Dusty) = 0.285*0.438 / 0.375 = 0.332
Here the probability of the flight arriving on time given Dusty weather (0.666) is greater than the probability of it not arriving on time (0.332), so the class assigned will be 'In Time': given Dusty weather, the flight is predicted to land on time.
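The same calculation can be written as a few lines of Python. This is a minimal sketch that simply plugs in the counts from the worked example above (16 records in total, 9 on-time flights of which 4 were Dusty, 7 delayed flights of which 2 were Dusty):

# Reproduce the hand calculation for the flight example using the counts above.
total, dusty = 16, 6
yes, dusty_and_yes = 9, 4
no, dusty_and_no = 7, 2

p_dusty = dusty / total
p_yes, p_no = yes / total, no / total
p_dusty_given_yes = dusty_and_yes / yes
p_dusty_given_no = dusty_and_no / no

p_yes_given_dusty = p_dusty_given_yes * p_yes / p_dusty   # ~0.667
p_no_given_dusty = p_dusty_given_no * p_no / p_dusty      # ~0.333

print("P(Yes | Dusty) =", round(p_yes_given_dusty, 3))
print("P(No  | Dusty) =", round(p_no_given_dusty, 3))
print("Prediction:", "In Time" if p_yes_given_dusty > p_no_given_dusty else "Delayed")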
Suppose we are predicting whether a newly arrived email is spam. The algorithm predicts based on the keywords in the dataset. If a new keyword such as "money" appears for which there is no tuple in the training data, its likelihood is zero, and since the likelihoods are multiplied together the model assigns a posterior probability of zero regardless of the other keywords. This is referred to as the zero probability (or zero frequency) problem.
We can get around this issue by using smoothing techniques. One of them is Laplace smoothing (also called the Laplace correction or add-one smoothing), which adds one extra tuple for each keyword-class pair. In the example above, suppose we have 1000 tuples in the training dataset for the spam class: 0 tuples for the keyword "money", 990 tuples for the keyword "password", and 10 tuples for the keyword "account". Without Laplace smoothing the probabilities would be 0 (0/1000), 0.990 (990/1000), and 0.010 (10/1000).

If we apply Laplace smoothing and add one tuple for each keyword, the new probabilities become 0.001 (1/1003), 0.988 (991/1003), and 0.011 (11/1003), so no keyword ends up with a probability of exactly zero.
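As a small sketch using the counts from this example, the correction amounts to adding one to every count and adjusting the denominator accordingly:

# Laplace (add-one) smoothing on the keyword counts from the example above.
counts = {"money": 0, "password": 990, "account": 10}
total = sum(counts.values())                      # 1000 tuples

raw = {k: v / total for k, v in counts.items()}
smoothed = {k: (v + 1) / (total + len(counts)) for k, v in counts.items()}

print("raw      :", raw)       # 'money' gets probability 0.0
print("smoothed :", smoothed)  # 'money' now gets a small non-zero probability (~0.001)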
# Gaussian Naive Bayes Classification
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split, GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
import warnings
warnings.filterwarnings("ignore")
import scipy.stats as stats
%matplotlib inline

# Load the Pima Indians Diabetes dataset and separate features from the target
data = pd.read_csv('/kaggle/input/pima-indians-diabetes-database/diabetes.csv')
X = data.drop(columns=['Outcome'])
Y = data['Outcome']

# Split into train and test sets; the split ratio and random_state are assumed here,
# since the original snippet omitted this step but the code below relies on it
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=1)
from sklearn.impute import SimpleImputer
rep_0 = SimpleImputer(missing_values=0, strategy="mean")
cols = X_train.columns
# Replace zeros with the column mean; fit the imputer on the training data only
X_train = pd.DataFrame(rep_0.fit_transform(X_train))
X_test = pd.DataFrame(rep_0.transform(X_test))
X_train.columns = cols
X_test.columns = cols
X_train.head()
# Predicting train and test accuracy
# Instantiate the Gaussian Naive Bayes model (its definition was missing above)
model = GaussianNB()

# Predict the target on the train dataset
predict_train = model.fit(X_train, y_train).predict(X_train)

# Accuracy score on the train dataset
accuracy_train = accuracy_score(y_train, predict_train)
print('accuracy_score on train dataset : ', accuracy_train)

# Predict the target on the test dataset
predict_test = model.predict(X_test)

# Accuracy score on the test dataset
accuracy_test = accuracy_score(y_test, predict_test)
print('accuracy_score on test dataset : ', accuracy_test)
#accuracy_score on train dataset : 0.7597765363128491 #accuracy_score on test dataset : 0.7575757575757576
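As a side note, cross_val_score (already imported above) can give a less split-dependent estimate of the model's accuracy. This is a minimal sketch; the choice of 5 folds is arbitrary:

# Optional: estimate accuracy with 5-fold cross-validation on the training data
cv_scores = cross_val_score(GaussianNB(), X_train, y_train, cv=5, scoring='accuracy')
print('mean cross-validated accuracy : ', cv_scores.mean())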
To tune the model we will search over the var_smoothing parameter of GaussianNB, which adds a portion of the largest feature variance to all variances for calculation stability. A log-spaced grid of candidate values can be generated with np.logspace, for example:

np.logspace(0, -9, num=10)
from sklearn.model_selection import RepeatedStratifiedKFold
cv_method = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=999)
from sklearn.preprocessing import PowerTransformer

params_NB = {'var_smoothing': np.logspace(0, -9, num=100)}
gs_NB = GridSearchCV(estimator=model,
                     param_grid=params_NB,
                     cv=cv_method,
                     verbose=1,
                     scoring='accuracy')

Data_transformed = PowerTransformer().fit_transform(X_test)
gs_NB.fit(Data_transformed, y_test);
results_NB = pd.DataFrame(gs_NB.cv_results_['params'])
results_NB['test_score'] = gs_NB.cv_results_['mean_test_score']
# Predict the target on the test dataset
predict_test = gs_NB.predict(Data_transformed)

# Accuracy score on the test dataset
accuracy_test = accuracy_score(y_test, predict_test)
print('accuracy_score on test dataset : ', accuracy_test)
#accuracy_score on test dataset : 0.7922077922077922
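To see which smoothing value the search settled on, the fitted GridSearchCV object exposes the best parameters and the corresponding mean cross-validated score. This is a small usage sketch; the exact values depend on the run:

# Inspect the best var_smoothing value found by the grid search
print('best parameters : ', gs_NB.best_params_)
print('best cross-validated accuracy : ', gs_NB.best_score_)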