Calculating Precision and Recall for Binary Classification

There are many metrics for evaluating the accuracy of machine learning models, and the appropriate metric depends on the type of model. Machine learning models fall into two broad categories: supervised learning and unsupervised learning. Classification, regression, and ranking are examples of supervised learning.

Classification is used to label data based on its attributes, and the number of labels can be two or more. Binary classification assigns one of two possible labels, whereas multi-class classification allows more than two possible classes. Precision-recall is one of the many metrics used to measure classification accuracy, and it can be used for both types of classification. For simplicity, we will focus on precision-recall for binary classification only.

Precision and recall are actually two different metrics, but they are often used together to give a fuller picture. Precision answers the question, “Out of the items that the classifier predicted to be relevant, how many are truly relevant?” Recall answers the question, “Out of all the items that are truly relevant, how many were found by the classifier?” In short, precision is concerned with the quality of what was retrieved, whereas recall is concerned with how much of the relevant material was actually retrieved.

Mathematically, we can define them as follows:

Precision = (number of relevant items retrieved) / (total number of items retrieved)

Recall = (number of relevant items retrieved) / (total number of relevant items)

To make the definitions above more concrete, let's look at an example:

Suppose our algorithm is trying to retrieve relevant items from a database, and the scenario is depicted by the Venn diagram below. The left circle represents the set of relevant items in the database, and the right circle represents the set of items retrieved by our algorithm.

 

[Venn diagram: two overlapping circles. The left circle is the set of relevant items, the right circle is the set of retrieved items, and the regions are labeled A (left only), B (the overlap), and C (right only).]

Here, B is the overlap (items that are both relevant and retrieved), A is the rest of the left circle (relevant items that were missed), and C is the rest of the right circle (retrieved items that are not relevant). So, Precision = B / (B + C) and Recall = B / (A + B).
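To make the Venn-diagram view concrete, here is a minimal Python sketch. The item IDs and the contents of the two sets are made up purely for illustration; the point is that the overlap B drives both metrics.

```python
# Hypothetical sets of item IDs, used only to illustrate the Venn diagram above.
relevant = {1, 2, 3, 4, 5, 6}   # left circle: items that are truly relevant
retrieved = {4, 5, 6, 7, 8}     # right circle: items our algorithm returned

B = relevant & retrieved        # overlap: relevant items that were retrieved
A = relevant - retrieved        # relevant items the algorithm missed
C = retrieved - relevant        # retrieved items that are not relevant

precision = len(B) / (len(B) + len(C))   # = |B| / |retrieved|
recall = len(B) / (len(A) + len(B))      # = |B| / |relevant|

print(f"Precision = {precision:.2f}")    # 3 / 5 = 0.60
print(f"Recall    = {recall:.2f}")       # 3 / 6 = 0.50
```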

Let’s look at another example. This time we will do some calculations by hand as well. But before the calculation, let's define a few abbreviations that will be helpful:

TN = True Negative (the case was negative and the model predicted negative)
TP = True Positive (the case was positive and the model predicted positive)
FN = False Negative (the case was positive but the model predicted negative)
FP = False Positive (the case was negative but the model predicted positive)
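Given a vector of true labels and a vector of predictions, these four counts can be tallied directly. The snippet below is a small sketch; y_true and y_pred are made-up example vectors, with 1 meaning positive and 0 meaning negative.

```python
# Made-up labels and predictions, encoded as 1 = positive, 0 = negative.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")   # TP=3, TN=3, FP=1, FN=1
```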

Suppose our example set contains 1000 entries, and after applying our nifty model we get the following result:

                     Predicted Negative    Predicted Positive
Actual Negative      TN = 540              FP = 60
Actual Positive      FN = 80               TP = 320

Total Negative cases (TN + FP) = 540 + 60 = 600
Total Positive cases (TP + FN) =   320 + 80 = 400
Total Prediction as Negative (TN + FN) = 540 + 80 = 620
Total Prediction as Positive (TP + FP) = 320 + 60 = 380

Now, what percentage of the positive cases did our model catch? The answer is:

Recall = TP / (TP + FN) = 320 / (320 + 80) = 320 / 400 = 0.80 = 80%

This is Recall.

And what percentage of the positive predictions were correct?

Precision = TP / (TP + FP) = 320 / (320 + 60) = 320 / 380 ≈ 0.842 ≈ 84.2%

This is Precision.
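The same hand calculation can be reproduced in a few lines of Python, plugging in the counts from the hypothetical confusion matrix above:

```python
# Counts from the hypothetical 1000-entry example above.
tp, fp, fn, tn = 320, 60, 80, 540

recall = tp / (tp + fn)       # 320 / 400
precision = tp / (tp + fp)    # 320 / 380

print(f"Recall    = {recall:.1%}")     # 80.0%
print(f"Precision = {precision:.1%}")  # 84.2%
```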

That’s all. I hope this helps in understanding the concepts of precision and recall.

Oh! I forgot to mention an acknowledgement. Thanks to Alice Zheng for her wonderful book "Evaluating Machine Learning Models: A Beginner's Guide," which helped me learn these concepts clearly.

 
