Machine Learning 101: Decision Tree Algorithm for Classification

Akshay Last Updated : 01 Mar, 2021
5 min read

This article was published as a part of the Data Science Blogathon.

Overview

  • Learn how the decision tree algorithm works for classification problems in machine learning.
  • Understand the attribute selection measures it relies on: entropy, information gain, and Gini impurity.

 

Decision Tree Algorithm

The decision tree algorithm belongs to the family of supervised machine learning algorithms. It can be used for both classification and regression problems.

The goal of the algorithm is to create a model that predicts the value of a target variable. To do so, the decision tree uses a tree representation of the problem, in which each leaf node corresponds to a class label and attributes are tested at the internal nodes of the tree.

Let’s take a sample data set to move further.
[Figure: sample data set of 14 patients]
Suppose we have a data set of 14 patients, and we have to predict which drug, A or B, to suggest to each patient.
Let’s say we pick cholesterol as the first attribute on which to split the data.
[Figure: splitting the data on cholesterol into High and Normal branches]

This splits our data into two branches, High and Normal, based on cholesterol, as you can see in the above figure.

Let’s suppose our new patient has high cholesterol. From the above split of our data, we cannot say whether Drug A or Drug B will be suitable for the patient.

Also, if the patient’s cholesterol is normal, we still do not have enough information to determine whether Drug A or Drug B is suitable for the patient.

Let us take another attribute, Age. As we can see, age has three categories in it: Young, Middle-aged, and Senior. Let’s try to split on it.

[Figure: splitting the data on Age into Young, Middle-aged, and Senior branches]

From the above figure, we can now easily predict which drug to give a patient based on his or her report.
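
To make this split concrete, here is a minimal sketch in Python with scikit-learn. The 14-patient table below is hypothetical (invented to match the story above; the article’s actual figures are not reproduced here), and criterion="entropy" tells scikit-learn to pick splits by information gain, which is covered next.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical 14-patient data set, invented for illustration only.
data = pd.DataFrame({
    "Age": ["Young", "Young", "Middle", "Senior", "Senior", "Senior", "Middle",
            "Young", "Young", "Senior", "Young", "Middle", "Middle", "Senior"],
    "Cholesterol": ["High", "High", "High", "Normal", "Normal", "Normal", "Normal",
                    "High", "Normal", "Normal", "Normal", "High", "Normal", "High"],
    "Drug": ["B", "B", "A", "A", "A", "B", "A",
             "B", "A", "A", "A", "A", "A", "B"],
})

# scikit-learn trees need numeric inputs, so one-hot encode the categories.
X = pd.get_dummies(data[["Age", "Cholesterol"]])
y = data["Drug"]

# criterion="entropy" selects splits by information gain.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)

# Print the learned splits as plain text.
print(export_text(tree, feature_names=list(X.columns)))
```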

Assumptions we make while using the decision tree:

– In the beginning, we consider the whole training set as the root.

– Feature values are preferred to be categorical; if the values are continuous, they are discretized before building the model (see the sketch after this list).

– Records are distributed recursively on the basis of attribute values.

– We use a statistical measure to decide which attribute to place as the root node or an internal node.
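
On the second assumption, here is a minimal sketch of discretizing a continuous feature with pandas before building the model; the cholesterol readings and the 200 mg/dL cutoff are assumed values for illustration, not from the article.

```python
import pandas as pd

# Hypothetical continuous cholesterol readings (mg/dL).
readings = pd.Series([180, 240, 210, 195, 260, 150])

# Bin the continuous values into two categories; the 200 mg/dL
# threshold is an assumption made for this example.
levels = pd.cut(readings, bins=[0, 200, float("inf")], labels=["Normal", "High"])
print(levels.tolist())  # ['Normal', 'High', 'High', 'Normal', 'High', 'Normal']
```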

Mathematics behind the decision tree algorithm: before going to information gain, we first have to understand entropy.

Entropy: Entropy is a measure of the impurity, disorder, or uncertainty in a bunch of examples.

Purpose of Entropy:

Entropy controls how a Decision Tree decides to split the data. It affects how a Decision Tree draws its boundaries.

Entropy values range from 0 to 1 (for a two-class problem). The lower the entropy of a node, the purer it is.

 

 

In symbols, for a node S with a fraction p₊ of yes labels and a fraction p₋ of no labels:

Entropy(S) = −p₊ log₂(p₊) − p₋ log₂(p₋)

[Figure: feature F1 at the root, splitting into child nodes F2 and F3]

Suppose we have features F1, F2, and F3, and we selected the F1 feature as our root node.

F1 contains 9 yes labels and 5 no labels. After splitting on F1 we get F2, which has 6 yes / 2 no, and F3, which has 3 yes / 3 no.

Now let us calculate the entropy of F2 using the entropy formula.

Putting the values into the formula:

Entropy(F2) = −(6/8) log₂(6/8) − (2/8) log₂(2/8) ≈ 0.811

Here, 6 is the number of yes labels, taken as the positive class when calculating the probability, and 8 is the total number of rows present in F2.

Similarly, if we compute the entropy for F3 we get 1 bit, the worst case for an attribute, since it has 50% yes and 50% no.
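
A minimal sketch of the same calculation in Python; the (yes, no) counts mirror the F2 and F3 nodes above.

```python
from math import log2

def entropy(pos: int, neg: int) -> float:
    """Entropy of a node with `pos` yes labels and `neg` no labels."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:  # treat 0 * log2(0) as 0 for pure nodes
            p = count / total
            result -= p * log2(p)
    return result

print(entropy(6, 2))  # F2: ~0.811
print(entropy(3, 3))  # F3: 1.0, the 50/50 worst case
```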

This splitting goes on until we get a pure subset.

 

What is a Pure Subset?

A pure subset is a situation where we get either all yes or all no labels in a node.

We have performed this for one node only. After splitting F2 we may require some other attribute to reach the leaf node; we then have to take the entropy of those child nodes as well and add up all those entropy values. For that, we have the concept of information gain.

Information Gain: Information gain is used to decide which feature to split on at each step in building the tree. Simplicity is best, so we want to keep our tree small. To do so, at each step we should choose the split that results in the purest child nodes. A commonly used measure of purity is called information.

For each node of the tree, the information value measures how much information a feature gives us about the class. The split with the highest information gain is taken as the first split, and the process continues until all child nodes are pure or until the information gain is 0.

 

Gain(S, A) = Entropy(S) − Σv (|Sv| / |S|) × Entropy(Sv)

where the sum runs over each value v of the attribute A on which we split.

The algorithm calculates the information gain for each split and the split which is giving the highest value of information gain is selected.

We can say that in information gain we compute the weighted average of the entropies of the child nodes produced by a specific split and subtract it from the entropy of the parent.

Sv = the subset of samples in a child node after the split; for F2, |Sv| = 6 yes + 2 no = 8.

S = the total sample; for F1, |S| = 9 + 5 = 14.

Now calculating the information gain for the split of F1 into F2 and F3:

Entropy(F1) = −(9/14) log₂(9/14) − (5/14) log₂(5/14) ≈ 0.940

Gain = 0.940 − (8/14) × 0.811 − (6/14) × 1.0 ≈ 0.048

Like this, the algorithm computes the information gain for each of the n candidate splits, and whichever split gives the highest information gain is taken to construct the decision tree.

The higher the information gain of a split, the higher its chance of getting selected.
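
As a sketch, here is the same information gain calculation in Python, reusing the entropy function defined earlier; the (yes, no) counts are the F1, F2, and F3 figures from this example.

```python
def information_gain(parent, children):
    """Information gain of a split, given (yes, no) counts for the
    parent node and for each child node."""
    total = sum(parent)
    weighted = sum((pos + neg) / total * entropy(pos, neg)
                   for pos, neg in children)
    return entropy(*parent) - weighted

# Parent F1: 9 yes / 5 no; children F2 (6 yes / 2 no) and F3 (3 yes / 3 no).
print(information_gain((9, 5), [(6, 2), (3, 3)]))  # ~0.048
```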

Gini Impurity:

Gini Impurity is a measurement used when building decision trees to determine how the features of a data set should split nodes to form the tree. More precisely, the Gini impurity of a data set is a number between 0 and 0.5 (for a two-class problem), which indicates the likelihood of a new, random data point being misclassified if it were given a random class label according to the class distribution in the data set.
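
For reference, the standard formula is Gini(S) = 1 − Σᵢ pᵢ², where pᵢ is the proportion of class i in the node. A minimal sketch for the two-class case, using the same (yes, no) counts as the entropy example above:

```python
def gini(pos: int, neg: int) -> float:
    """Gini impurity of a node with `pos` yes and `neg` no labels."""
    total = pos + neg
    return 1.0 - (pos / total) ** 2 - (neg / total) ** 2

print(gini(6, 2))  # 0.375
print(gini(3, 3))  # 0.5, the two-class maximum
```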

 

Entropy vs Gini Impurity

The maximum value for entropy is 1, whereas the maximum value for Gini impurity is 0.5 (in the two-class case).

Because Gini impurity does not involve any logarithmic function, it takes less computational time to calculate than entropy.

 

End Notes

In this article, we have covered a lot of details about the decision tree: how it works, the maths behind it, and attribute selection measures such as entropy, information gain, and Gini impurity with their formulas, along with how the algorithm uses them to build the tree.

By now I hope you have got an idea about the decision tree, one of the best machine learning algorithms for solving classification problems.

As a fresher, I’d advise you to learn these techniques, understand their implementation, and later apply them in your own models.

For a better understanding, refer to https://scikit-learn.org/stable/modules/tree.html

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
