Have you ever expected great results from your machine learning model, only to get poor accuracy? You’ve put in the effort, so what went wrong? How can you fix it? There are many ways to assess your classification model, but the confusion matrix is one of the most reliable options. It shows how well your model performed and where it made errors, helping you improve. Beginners often find the confusion matrix confusing, but it’s actually simple and powerful. This tutorial will explain what a confusion matrix in machine learning is and how it provides a complete view of your model’s performance.
Despite its name, you’ll see that a confusion matrix is straightforward and effective. Let’s explore the confusion matrix together!
In this article, you will explore the confusion matrix and the metrics derived from it. We will also look at the role of the confusion matrix in deep learning and its applications in AI, building a comprehensive understanding of model performance evaluation.
Learning Objectives
Learn what a confusion matrix is and understand the various terms related to it.
Learn to use a confusion matrix for multi-class classification.
Learn to implement a confusion matrix using scikit-learn in Python.
A confusion matrix is a performance evaluation tool in machine learning, representing the accuracy of a classification model. It displays the number of true positives, true negatives, false positives, and false negatives. This matrix aids in analyzing model performance, identifying misclassifications, and improving predictive accuracy.
A Confusion matrix is an N x N matrix used for evaluating the performance of a classification model, where N is the total number of target classes. The matrix compares the actual target values with those predicted by the machine learning model. This gives us a holistic view of how well our classification model is performing and what kinds of errors it is making.
For a binary classification problem, we would have a 2 x 2 matrix, as shown below, with 4 values:
Let’s decipher the matrix:
The target variable has two values: Positive or Negative
The columns represent the actual values of the target variable
The rows represent the predicted values of the target variable
But wait – what’s TP, FP, FN, and TN here? That’s the crucial part of a confusion matrix. Let’s understand each term below.
Important Terms in a Confusion Matrix
True Positive (TP)
The predicted value matches the actual value, or the predicted class matches the actual class.
The actual value was positive, and the model predicted a positive value.
True Negative (TN)
The predicted value matches the actual value, or the predicted class matches the actual class.
The actual value was negative, and the model predicted a negative value.
False Positive (FP) – Type I Error
The predicted value does not match the actual value, or the predicted class does not match the actual class.
The actual value was negative, but the model predicted a positive value.
Also known as the type I error.
False Negative (FN) – Type II Error
The predicted value does not match the actual value, or the predicted class does not match the actual class.
The actual value was positive, but the model predicted a negative value.
Also known as the type II error.
Let me give you an example to better understand this. Suppose we had a classification dataset with 1000 data points. We fit a classifier (say logistic regression or decision tree) on it and get the below confusion matrix:
The different values of the Confusion matrix would be as follows:
True Positive (TP) = 560, meaning the model correctly classified 560 positive class data points.
True Negative (TN) = 330, meaning the model correctly classified 330 negative class data points.
False Positive (FP) = 60, meaning the model incorrectly classified 60 negative class data points as belonging to the positive class.
False Negative (FN) = 50, meaning the model incorrectly classified 50 positive class data points as belonging to the negative class.
This turned out to be a pretty decent classifier for our dataset, considering the relatively large number of true positive and true negative values.
Remember the Type I and Type II errors; interviewers love to ask the difference between these two! You can prepare for all of this with our Machine Learning Course Online.
Why Do We Need a Confusion Matrix?
Before we answer this question, let’s think about a hypothetical classification problem.
Let’s say you want to predict how many people are infected with a contagious virus before they show symptoms, so that you can isolate them from the healthy population (ringing any bells yet?). The two values for our target variable would be Sick and Not Sick.
Now, you must be wondering why we need a confusion matrix when we have our all-weather friend – Accuracy. Well, let’s see where classification accuracy falters.
Our dataset is an example of an imbalanced dataset: out of 1,000 data points, 960 belong to the negative class and only 40 to the positive class. This is how we’ll calculate the accuracy:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Let’s see how our model performed:
The total outcome values are:
TP = 30, TN = 930, FP = 30, FN = 10
So, the accuracy of our model turns out to be:
Accuracy = (30 + 930) / (30 + 930 + 30 + 10) = 0.96
96%! Not bad!
But it gives the wrong idea about the result. Think about it.
Our model is saying, “I can predict sick people 96% of the time”. However, it is doing the opposite. It predicts the people who will not get sick with 96% accuracy while the sick are spreading the virus!
Do you think this is a correct metric for our model, given the seriousness of the issue? Shouldn’t we be measuring how many positive cases we can predict correctly to arrest the spread of the contagious virus? Or maybe, out of the correct predictions, how many are positive cases to check the reliability of our model?
This is where we come across the dual concept of Precision and Recall.
How to Calculate a Confusion Matrix for a 2-class Classification Problem?
To calculate the confusion matrix for a 2-class classification problem, you will need to know the following:
True positives (TP): The number of samples that were correctly predicted as positive.
True negatives (TN): The number of samples that were correctly predicted as negative.
False positives (FP): The number of samples that were incorrectly predicted as positive.
False negatives (FN): The number of samples that were incorrectly predicted as negative.
Once you have these values, you can calculate the confusion matrix using the following table:
| Predicted | Actual: TRUE (Positive) | Actual: FALSE (Negative) |
| --- | --- | --- |
| Positive | True positives (TP) | False positives (FP) |
| Negative | False negatives (FN) | True negatives (TN) |
Here is an example of how to calculate the confusion matrix for a 2-class classification problem (see the sketch below):
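The counting logic is simple enough to write by hand. The following is a minimal Python sketch with hypothetical actual and predicted labels (the values are illustrative, not from the article’s dataset):

```python
# Hypothetical actual and predicted labels (1 = positive, 0 = negative)
actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

TP = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # correctly predicted positive
TN = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # correctly predicted negative
FP = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # negative predicted as positive
FN = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # positive predicted as negative

print(f"TP={TP}, TN={TN}, FP={FP}, FN={FN}")  # TP=3, TN=4, FP=1, FN=2
```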
The confusion matrix can be used to calculate a variety of metrics, such as accuracy, precision, recall, and F1 score.
Precision vs. Recall
Precision tells us how many of the cases predicted as positive actually turned out to be positive.
Here’s how to calculate Precision:
Precision = TP / (TP + FP)
This would determine whether our model is reliable or not.
Recall tells us how many of the actual positive cases we were able to predict correctly with our model.
And here’s how we can calculate Recall:
Recall = TP / (TP + FN)
We can easily calculate Precision and Recall for our model by plugging the values into the above equations: Precision = 30 / (30 + 30) = 50% and Recall = 30 / (30 + 10) = 75%.
50% of the cases predicted as positive turned out to be actual positives, whereas 75% of the actual positives were successfully predicted by our model. Awesome!
Precision is a useful metric in cases where False Positives are a higher concern than False Negatives.
Precision is important in music or video recommendation systems, e-commerce websites, etc. Wrong results could lead to customer churn and be harmful to the business.
Recall is a useful metric in cases where False Negative trumps False Positive.
Recall is important in medical cases where it doesn’t matter whether we raise a false alarm, but the actual positive cases should not go undetected!
In our example of dealing with a contagious virus, the confusion matrix becomes crucial, and Recall, which measures the ability to capture all actual positives, emerges as the better metric. We want to avoid mistakenly releasing an infected person into the healthy population, where they could spread the virus. This is exactly why accuracy is an inadequate metric for evaluating our model; the confusion matrix, and recall in particular, provides a far more insightful measure in such critical scenarios.
But there will be cases where there is no clear distinction between whether Precision is more important or Recall. What should we do in those cases? We combine them!
What is F1-Score?
In practice, when we try to increase the precision of our model, the recall goes down, and vice-versa. The F1-score captures both trends in a single value:
F1-Score = 2 * (Precision * Recall) / (Precision + Recall)
F1-score is a harmonic mean of Precision and Recall, and so it gives a combined idea about these two metrics. It is maximum when Precision is equal to Recall.
But there is a catch here. The interpretability of the F1-score is poor. This means that we don’t know what our classifier is maximizing – precision or recall. So, we use it in combination with other evaluation metrics, giving us a complete picture of the result.
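To make the arithmetic concrete, here is a small Python sketch (an illustration, not part of the original article) that plugs the TP/TN/FP/FN values from the virus example above into these formulas:

```python
# Outcome values from the virus example above
TP, TN, FP, FN = 30, 930, 30, 10

accuracy  = (TP + TN) / (TP + TN + FP + FN)                  # 0.96
precision = TP / (TP + FP)                                   # 0.50
recall    = TP / (TP + FN)                                   # 0.75
f1        = 2 * precision * recall / (precision + recall)    # 0.60

print(f"Accuracy={accuracy:.2f}, Precision={precision:.2f}, Recall={recall:.2f}, F1={f1:.2f}")
```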
Confusion Matrix Using Scikit-learn in Python
You know the theory – now let’s put it into practice. Let’s code a confusion matrix with the Scikit-learn (sklearn) library in Python.
Sklearn has two great functions: confusion_matrix() and classification_report().
Sklearn confusion_matrix() returns the values of the Confusion matrix. The output is, however, slightly different from what we have studied so far. It takes the rows as Actual values and the columns as Predicted values. The rest of the concept remains the same.
Sklearn classification_report() outputs precision, recall, and f1-score for each target class. In addition to this, it also has some extra values: micro avg, macro avg, and weighted avg
Micro average is the precision/recall/f1-score calculated globally, by counting the total true positives, false negatives, and false positives across all classes.
Macro average is the unweighted mean of the per-class precision/recall/f1-score.
Weighted average is the mean of the per-class precision/recall/f1-score, weighted by the number of true instances (support) of each class.
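Here is a minimal sketch of both functions, assuming a small set of hypothetical actual and predicted labels (the predictions of any fitted classifier would work the same way):

```python
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical actual and predicted labels for a binary problem
y_actual = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred   = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# sklearn places actual values on the rows and predicted values on the columns
print(confusion_matrix(y_actual, y_pred))

# Precision, recall, and f1-score per class, plus the averages
print(classification_report(y_actual, y_pred))
```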
Confusion Matrix for Multi-Class Classification
How would a confusion matrix in machine learning work for a multi-class classification problem? Well, don’t scratch your head! We will have a look at that here.
Let’s draw a confusion matrix for a multiclass problem where we have to predict whether a person loves Facebook, Instagram, or Snapchat. The confusion matrix would be a 3 x 3 matrix like this:
The true positive, true negative, false positive, and false negative values for each class would be calculated by adding the relevant cell values, as in the sketch below.
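As a sketch of that arithmetic, assume sklearn’s convention (rows = actual, columns = predicted) and a few hypothetical labels for the three classes; the class names and label lists here are purely illustrative:

```python
from sklearn.metrics import confusion_matrix

classes = ["Facebook", "Instagram", "Snapchat"]

# Hypothetical actual and predicted labels
y_actual = ["Facebook", "Instagram", "Snapchat", "Facebook", "Snapchat", "Instagram", "Facebook"]
y_pred   = ["Facebook", "Snapchat",  "Snapchat", "Instagram", "Snapchat", "Instagram", "Facebook"]

cm = confusion_matrix(y_actual, y_pred, labels=classes)

for i, cls in enumerate(classes):
    TP = cm[i, i]                     # correctly predicted as this class
    FN = cm[i, :].sum() - TP          # this class predicted as something else
    FP = cm[:, i].sum() - TP          # other classes predicted as this class
    TN = cm.sum() - TP - FN - FP      # everything else
    print(f"{cls}: TP={TP}, FP={FP}, FN={FN}, TN={TN}")
```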
That’s it! You are ready to decipher any N x N confusion matrix!
Conclusion
The Confusion matrix is not so confusing anymore, is it?
Hope this article gave you a solid base on how to interpret and use a confusion matrix for classification algorithms in machine learning. The matrix helps you understand where the model went wrong and gives guidance to correct the path. It is a powerful and commonly used tool for evaluating the performance of a classification model in machine learning.
We will soon come out with an article on the AUC-ROC curve and continue our discussion there. Until next time, don’t lose hope in your classification model; you just might be using the wrong evaluation metric!
Key Takeaways
True Positive and True Negative values mean the predicted value matches the actual value.
A Type I Error happens when the model makes an incorrect prediction, as in, the model predicted positive for an actual negative value.
A Type II Error happens when the model makes an incorrect prediction of an actual positive value as negative.
Frequently Asked Questions
Q1. How to interpret a confusion matrix?
A. A confusion matrix summarizes a classification model’s accuracy and errors, where the entries indicate true positive, true negative, false positive, and false negative cases.
Q2. What are the advantages of using Confusion matrix?
A. The confusion matrix gives a thorough assessment of a classification model’s performance and supports more in-depth analysis by providing the counts of true positives, true negatives, false positives, and false negatives.
Q3. What are some examples of confusion matrix applications?
A. Confusion matrices find applications in many domains, such as fraud detection, sentiment analysis, medical diagnosis (determining true/false positives/negatives for illnesses), and evaluating image recognition accuracy.
Q4. What is the confusion matrix diagram?
A. A confusion matrix diagram is a visual summary of a classification model’s performance. It shows the true positive, true negative, false positive, and false negative values in a structured matrix format.
Q5. How to use the Confusion Matrix in Machine Learning?
A. You can use a confusion matrix in machine learning in 4 steps (a minimal end-to-end sketch follows the list):
Train a machine learning model. This can be done using any machine learning algorithm, such as logistic regression, decision tree, or random forest.
Make predictions on a test dataset. This is data that the model has not been trained on.
Construct a confusion matrix. This can be done using a Python library such as Scikit-learn.
Analyze the confusion matrix. Look at the diagonal elements of the matrix to see how many instances the model predicted correctly. Look at the off-diagonal elements of the matrix to see how many instances the model predicted incorrectly.
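Here is a minimal end-to-end sketch of those four steps, using scikit-learn’s built-in breast cancer dataset and a logistic regression model as stand-ins (any dataset and classifier would do):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report

# 1. Train a machine learning model
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# 2. Make predictions on a test dataset the model has not been trained on
y_pred = model.predict(X_test)

# 3. Construct the confusion matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)

# 4. Analyze it: the diagonal holds correct predictions, off-diagonal cells hold errors
print("Correct:", cm.trace(), "Incorrect:", cm.sum() - cm.trace())
print(classification_report(y_test, y_pred))
```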