In today’s post, we will discuss the merits of ROC curves vs. accuracy estimation.
Several questions may arise while evaluating your machine learning models, such as which metric to report and trust. Choosing among these metrics is not straightforward because, in machine learning, each metric behaves differently on different natural datasets.
The comparison makes sense if we accept the hypothesis that “performance on past learning problems (roughly) predicts performance on future learning problems.”
The ROC vs. accuracy discussion is often conflated with the question “is the goal classification or ranking?”, because constructing an ROC curve requires producing a ranking.
Here, we assume the goal is classification rather than ranking. (There are several natural problems where ranking instances is preferred over classifying them; likewise, there are many natural problems where classification is the goal.)
The ROC curve is generated by computing and plotting the true positive rate against the false positive rate for a particular classifier across a family of thresholds.
True Positive Rate = True Positives / (True Positives + False Negatives)
False Positive Rate = False Positives / (False Positives + True Negatives)
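As a minimal sketch of how these rates are traced out (hypothetical labels and scores, scikit-learn assumed), the snippet below sweeps a family of thresholds with roc_curve:

```python
# Minimal sketch: tracing an ROC curve from held-out scores (scikit-learn assumed).
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])   # hypothetical scores

# roc_curve sweeps a family of thresholds and returns the two rates at each one.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold={thr:.2f}  TPR={t:.2f}  FPR={f:.2f}")
```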
Accuracy is the fraction of correct predictions on the test data. It is computed by dividing the number of correct predictions by the total number of predictions.
Accuracy = (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)
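As a minimal illustration (hypothetical labels and predictions, scikit-learn assumed), accuracy can be computed directly from the confusion-matrix counts:

```python
# Minimal sketch: accuracy from confusion-matrix counts (scikit-learn assumed).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # hypothetical labels
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])   # hypothetical hard predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)    # the formula above
print(f"accuracy = {accuracy:.2f}")
```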
Specification: The costs of different choices are rarely well specified, and the training examples are often not drawn from the same marginal distribution as the test examples. ROC curves allow for a meaningful comparison over a range of different choice costs and marginal distributions.
Dominance: Standard classification algorithms do not exhibit a dominance structure as the costs vary, so we should not say “algorithm A is better than algorithm B” when the choice costs are not known well enough to be sure.
Just-in-Time use: Any system with a good ROC curve can easily be fitted with a ‘knob’ that controls the trade-off between false positives and false negatives (see the code sketch that follows these points).
Summarization: Humans rarely have the time to digest the complexities of a conditional comparison, so having a single number, the area under the ROC curve (AROC), instead of the full curve is valuable.
Robustness: Algorithms with a large AROC are robust against a variation in costs.
Summarization: As with AROC, a single number is easier to digest than a full curve.
Intuitiveness: People immediately understand what accuracy means, and unlike (A)ROC, it is obvious what happens when one additional example is misclassified.
Statistical Stability: The basic test-set bound shows that accuracy is stable subject only to the IID assumption; for AROC (and ROC) this holds only when the number of examples in each class is not near zero.
Minimality: In the end, a classifier makes classification decisions. Accuracy measures this directly, while (A)ROC dilutes the measurement with hypothetical alternative choice costs. For the same reason, computing (A)ROC may require significantly more work than solving the original problem.
Generality: Accuracy generalizes immediately to multiclass accuracy, importance-weighted accuracy, and general (per-example) cost-sensitive classification, whereas ROC curves become problematic with even three classes.
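As a minimal sketch of two of the arguments above (the single-number summary and the threshold ‘knob’), the snippet below uses hypothetical labels and scores, with scikit-learn assumed:

```python
# Minimal sketch: AROC as a single-number summary, and the decision threshold as
# a "knob" trading false positives against false negatives (scikit-learn assumed).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])   # hypothetical scores

print("AROC =", roc_auc_score(y_true, y_score))                 # one number for the whole curve

for threshold in (0.3, 0.5, 0.7):                               # turning the knob
    y_pred = (y_score >= threshold).astype(int)
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```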
The AROC is easier to understand through its interpretation as a Wilcoxon-Mann-Whitney statistic, which measures the fraction of positive-negative instance pairs that are ranked correctly. This interpretation has other benefits as well: while generalizing ROC curves to more than two classes is not straightforward, it allows graceful generalizations of the AROC statistic to multi-category ranking.
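A minimal sketch of this pairwise interpretation (hypothetical labels and scores, scikit-learn assumed for the cross-check):

```python
# Minimal sketch: AROC as the fraction of positive-negative pairs ranked correctly
# (the Wilcoxon-Mann-Whitney view); scikit-learn assumed for the cross-check.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])   # hypothetical scores

pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
# Count pairs where the positive outscores the negative; ties count as one half.
correct = np.sum(pos[:, None] > neg[None, :]) + 0.5 * np.sum(pos[:, None] == neg[None, :])
pairwise_auc = correct / (len(pos) * len(neg))

print("pairwise estimate:", pairwise_auc)
print("roc_auc_score:   ", roc_auc_score(y_true, y_score))      # the two agree
```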
a) A subtle but interesting difference between AROC and most “standard” loss functions (0/1 loss, squared error, cost-sensitive classification, etc.) is that the standard losses can be evaluated for each example independently of the others, whereas AROC is defined only over a set of examples.
b) One neat use of AROC is as a base-rate-independent analogue of the Bayes rate. Datasets cannot be compared directly via their Bayes rates when their base rates differ (the base rate being the usual notion of the marginal, unconditional probability of the most probable class). Their “optimal” AROCs, however, can be compared directly as measures of how separable the classes are.
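A rough numerical sketch of this base-rate independence, on synthetic, hypothetical data (NumPy and scikit-learn assumed): the class-conditional score distributions are held fixed while the base rate varies, so AROC stays roughly constant while fixed-threshold accuracy does not:

```python
# Rough sketch: AROC is (roughly) independent of the base rate, while accuracy at a
# fixed threshold is not. Synthetic, hypothetical data; NumPy and scikit-learn assumed.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n_pos, n_neg):
    # Fixed class-conditional score distributions, so class separability never changes.
    scores = np.concatenate([rng.normal(1.5, 1.0, n_pos), rng.normal(0.0, 1.0, n_neg)])
    labels = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return labels, scores

for n_pos, n_neg in [(5000, 5000), (1000, 9000), (100, 9900)]:   # varying base rates
    y, s = make_data(n_pos, n_neg)
    auc = roc_auc_score(y, s)
    acc = np.mean((s >= 0.0).astype(int) == y)                   # accuracy at a fixed threshold
    print(f"base rate={n_pos / (n_pos + n_neg):.2f}  AROC={auc:.3f}  accuracy={acc:.3f}")
```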
In the ongoing conversation about AROC vs. accuracy vs. ROC, it is worth highlighting a notable contribution by Provost and Fawcett: the ROC Convex Hull (ROCCH). This method is an alternative to conventional ROC curves and the area-under-the-curve summary. Within the ROCCH framework, the classifiers that can achieve the highest expected utility are exactly those whose curves lie on the convex hull of all candidate classifiers’ curves. The slope along the upper boundary of the hull identifies the expected-cost-optimal regions, linking them to the practitioner’s assumptions about utilities and class priors.
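A minimal sketch of the ROCCH idea (hypothetical operating points, plain Python): given the (false positive rate, true positive rate) points of several candidate classifiers, keep only those on the upper convex hull:

```python
# Minimal sketch: the ROC Convex Hull (ROCCH) over candidate classifiers'
# (false positive rate, true positive rate) points. Hypothetical points, plain Python.
def roc_convex_hull(points):
    """Return the upper convex hull of ROC points, anchored at (0, 0) and (1, 1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in pts:
        # Drop earlier points that would fall below the segment to p (non-right turns).
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical (FPR, TPR) operating points for several candidate classifiers.
candidates = [(0.1, 0.4), (0.2, 0.7), (0.4, 0.75), (0.5, 0.9), (0.8, 0.95)]
print(roc_convex_hull(candidates))
# Classifiers whose points fall strictly below the hull are never expected-cost optimal.
```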
To recap: AROC, the area under the ROC curve, summarizes a classifier’s ranking quality across all thresholds in a single number. Accuracy is a more straightforward measure of overall correctness at a single operating point, while the ROC curve provides a graphical representation of a classifier’s performance across the full range of thresholds.
ROC analysis, and its AROC summary, are particularly valuable when the costs of false positives and false negatives are asymmetric or unknown, that is, when misclassifying certain instances has more significant consequences than others, because they do not commit to a single decision threshold.
In a medical diagnosis scenario, for example, the cost of a false negative (missing a disease) is typically much higher than that of a false positive. Accuracy ignores this cost asymmetry, whereas the ROC curve (and its AROC summary) lets the practitioner pick an operating point that keeps false negatives low.
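A minimal sketch of that choice (hypothetical labels and scores, scikit-learn assumed): scan the ROC operating points and pick the lowest-false-positive-rate threshold that still achieves a target sensitivity:

```python
# Minimal sketch: choosing an operating point when false negatives are costly,
# e.g. requiring sensitivity (TPR) of at least 0.95. Hypothetical data; scikit-learn assumed.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])                         # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9, 0.55, 0.3])  # hypothetical scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
meets_target = tpr >= 0.95                    # operating points with high enough sensitivity
best = int(np.argmax(meets_target))           # first (lowest-FPR) point meeting the target
print(f"threshold={thresholds[best]:.2f}  TPR={tpr[best]:.2f}  FPR={fpr[best]:.2f}")
```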