What is 'precision and recall' in machine learning?
There are several ways to explain and define "precision and recall" in machine learning. These two metrics are mathematically important in classification systems, and conceptually important in ways that touch on AI's effort to mimic human thought. After all, "precision" and "recall" are also used in neurological evaluation of human memory.
One way to think about precision and recall in IT is this: precision is the number of items that are both relevant and retrieved (the intersection of the two sets), divided by the total number of retrieved results, while recall is the number of items that are both relevant and retrieved, divided by the total number of relevant results.
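The set-based definition above can be sketched in a few lines of Python. This is a minimal example using hypothetical sets of document IDs (the specific numbers are made up for illustration):

```python
# Hypothetical example: document IDs a search system should return (relevant)
# versus the IDs it actually returned (retrieved).
relevant = {1, 2, 3, 4, 5}
retrieved = {3, 4, 5, 6, 7}

# Items that are both relevant and retrieved (set intersection).
overlap = relevant & retrieved

precision = len(overlap) / len(retrieved)  # 3 of 5 retrieved were relevant -> 0.6
recall = len(overlap) / len(relevant)      # 3 of 5 relevant were retrieved -> 0.6
```

Note that the intersection appears in both numerators; only the denominator changes between the two metrics.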
Another way to explain it is that precision measures the proportion of positive identifications in a classification set that were actually correct, while recall measures the proportion of actual positives that were identified correctly.
These two metrics often trade off against each other: tuning a classifier to improve one tends to reduce the other. Experts tag results as true positives, false positives, true negatives and false negatives in a confusion matrix in order to compute precision and recall. Changing the classification threshold also changes both metrics.
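Counting those four outcomes and deriving the two metrics can be sketched as follows; the labels and predictions here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical ground-truth labels and model predictions (1 = positive class).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Tally the four confusion-matrix cells.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

precision = tp / (tp + fp)  # correct positives among predicted positives
recall = tp / (tp + fn)     # correct positives among actual positives
```

Here the model predicts four positives, of which three are correct (precision 0.75), and it finds three of the four actual positives (recall 0.75).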
Another way to say it is that recall is the number of correct results divided by the number of results that should have been returned, while precision is the number of correct results divided by the number of all results that were returned. This framing is helpful because you can describe recall as how many results a system can "remember," and precision as how accurately it identifies those results. Here we get back to what precision and recall mean in a general sense: the ability to remember items, versus the ability to remember them correctly.
The technical analysis of true positives, false positives, true negatives and false negatives is extremely useful in evaluating machine learning technologies, because it shows how classification mechanisms work. By measuring precision and recall, experts can not only report the results of running a machine learning program, but also start to explain how that program produces its results: by what algorithmic work it comes to evaluate data sets in a particular way.
With that in mind, many machine learning professionals talk about precision and recall when analyzing results from test sets, training sets or subsequent performance data. Organizing this information in an array or matrix helps to show more transparently how the program works and what results it produces.
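The threshold effect mentioned earlier can also be sketched directly. The scores and labels below are hypothetical model outputs; sweeping the decision threshold shows the typical trade-off, where a stricter threshold raises precision and lowers recall:

```python
# Hypothetical classifier confidence scores and the true labels.
scores = [0.95, 0.85, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    1,    0,    0,    0]

def precision_recall(threshold):
    """Compute (precision, recall) when predicting positive at or above threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and t for p, t in zip(preds, labels))
    fp = sum(p and not t for p, t in zip(preds, labels))
    fn = sum((not p) and t for p, t in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With these numbers, a low threshold of 0.25 catches every positive (recall 1.0) at precision 4/6, while a strict threshold of 0.9 gives perfect precision but recall of only 0.25.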