

What is 'precision and recall' in machine learning?

By Justin Stoltzfus | Last updated: July 15, 2019
Made Possible By AltaML

There are a number of ways to explain and define “precision and recall” in machine learning. These two metrics are mathematically important in classification and information-retrieval systems, and conceptually important in ways that relate to AI’s effort to mimic human thought. After all, people use the notions of “precision” and “recall” in neurological evaluation, too.

One way to think about precision and recall in IT is to define precision as the intersection of relevant items and retrieved items divided by the total number of retrieved items, while recall is the intersection of relevant items and retrieved items divided by the total number of relevant items.
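The set-based definition above can be sketched in a few lines of Python. The document IDs here are made up for illustration; the point is that the numerator of both metrics is the same intersection, while the denominators differ.

```python
# Illustrative sketch (the document IDs are hypothetical, not from the article):
# precision and recall computed from sets of relevant and retrieved items.
relevant = {"doc1", "doc2", "doc3", "doc4"}   # items that should be returned
retrieved = {"doc2", "doc3", "doc5"}          # items the system actually returned

true_positives = relevant & retrieved  # set intersection: {"doc2", "doc3"}

precision = len(true_positives) / len(retrieved)  # 2 / 3: how many returns were correct
recall = len(true_positives) / len(relevant)      # 2 / 4: how many correct items were found

print(f"precision = {precision:.3f}")  # precision = 0.667
print(f"recall    = {recall:.3f}")     # recall    = 0.500
```

Note that a system can trivially score a perfect recall of 1.0 by retrieving everything, at the cost of very low precision, which is why the two are always reported together.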

Another way to explain it is that precision measures the proportion of positive identifications in a classification set that were actually correct, while recall measures the proportion of actual positives that were identified correctly.

These two metrics often trade off against each other: improving one tends to worsen the other. Experts tag true positives, false positives, true negatives and false negatives in a confusion matrix in order to compute precision and recall. Changing the classification threshold also changes the balance between the two.
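The threshold trade-off described above can be demonstrated with a small sketch. The labels and scores below are invented for illustration; lowering the threshold admits more predictions, which raises recall but lets in more false positives, lowering precision.

```python
# Hypothetical labels (1 = positive) and classifier scores, chosen to show
# how moving the decision threshold shifts the precision/recall balance.
def confusion_counts(labels, scores, threshold):
    """Tally the four confusion-matrix cells at a given threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 1)
    fp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < threshold and y == 1)
    tn = sum(1 for y, s in zip(labels, scores) if s < threshold and y == 0)
    return tp, fp, fn, tn

labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.6, 0.55, 0.4, 0.35, 0.3, 0.1]

for threshold in (0.5, 0.33):
    tp, fp, fn, tn = confusion_counts(labels, scores, threshold)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"threshold={threshold}: precision={precision:.3f}, recall={recall:.3f}")
```

At the higher threshold (0.5) this data gives precision 0.75 and recall 0.75; dropping the threshold to 0.33 pushes recall to 1.0 but pulls precision down to about 0.67.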

Another way to say it is that recall is the number of correct results divided by the number of results that should have been returned, while precision is the number of correct results divided by the number of all results that were returned. This framing is helpful, because you can explain recall as the number of results a system can “remember,” and precision as how accurately it identifies those results. Here we get back to what precision and recall mean in a general sense — the ability to remember items, versus the ability to remember them correctly.

The technical analysis of true positives, false positives, true negatives and false negatives is extremely useful in machine learning evaluation, because it shows how classification mechanisms actually behave. By measuring precision and recall rigorously, experts can not only report the results of running a machine learning program, but also begin to explain how the program produces those results — by what algorithmic work it comes to evaluate data sets in a particular way.

With that in mind, many machine learning professionals talk about precision and recall when analyzing results from training sets, test sets or later production data. Organizing these counts in an array or matrix helps show more transparently how the program works and what results it brings to the table.



Artificial Intelligence Emerging Technology Machine Learning


Written by Justin Stoltzfus | Contributor, Reviewer

Justin Stoltzfus is a freelance writer for various Web and print publications. His work has appeared in online magazines including Preservation Online, a project of the National Historic Trust, and many other venues.
