Statistical measurements of accuracy and precision reveal a test’s basic reliability. These terms, which describe sources of variability, are not interchangeable. A test method can be precise (reliably reproducible in what it measures) without being accurate (actually measuring what it is supposed to measure), or vice versa.
A test method is said to be accurate when it measures what it is supposed to measure. This means it is able to measure the true amount or concentration of a substance in a sample.
Picture a bull’s-eye target with a dart correctly hitting the centre ring and you see what an accurate test produces: the method is capable of hitting the intended target.
A test method is said to be precise when repeated determinations (analyses) on the same sample give similar results. When a test method is precise, the amount of random variation is small. The test method can be trusted because results are reliably reproduced time after time.
Picture a bull’s-eye target with darts all clustered together – but not in the centre ring – and you see what a precise but inaccurate method produces: the method can be counted on to reach the same target over and over again but the target may not be the one intended. When the method is both precise and accurate, bull’s-eye!
Although a test that is 100% accurate and 100% precise is the ideal, in reality this is impossible. Tests, instruments, and laboratory personnel each introduce a small amount of variability. This variability does not usually detract from the test’s value, because it is known and taken into account when results are interpreted.
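The distinction between accuracy and precision can be sketched numerically. In this hypothetical Python example (the sample values and "true" concentration are invented for illustration), the bias of the average result measures accuracy, while the standard deviation of repeated results measures precision:

```python
import statistics

def describe_method(measurements, true_value):
    """Summarise a test method's accuracy and precision.

    Accuracy: how close the average result is to the true value (bias).
    Precision: how tightly repeated results cluster (standard deviation).
    """
    bias = statistics.mean(measurements) - true_value   # accuracy
    spread = statistics.stdev(measurements)             # precision
    return bias, spread

# Hypothetical repeated determinations on one sample whose
# true concentration is 5.0 (units arbitrary).
accurate_and_precise = [4.9, 5.0, 5.1, 5.0, 5.0]   # on target, tight cluster
precise_but_inaccurate = [5.9, 6.0, 6.1, 6.0, 6.0]  # tight cluster, off target

print(describe_method(accurate_and_precise, 5.0))
print(describe_method(precise_but_inaccurate, 5.0))
```

The second method is just as reproducible as the first (same spread), but its results cluster a full unit away from the true value: the darts land together, outside the centre ring.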
Specificity and sensitivity reveal the likelihood of false negatives and false positives. To be effective, a pathology test is expected to detect abnormalities with certainty, yet two questions always remain. How likely is it that an individual has the disease that a positive test suggests? What are the chances that an individual has a certain disorder even though a test for it was negative?
Specificity is the ability of a test to correctly exclude individuals who do not have a given disease or disorder. For example, a certain test may have proven to be 90% specific. If 100 healthy individuals are tested with that method, only 90 of those 100 healthy people will be found to be “normal” (disease-free). The other 10 people also do not have the disease, but their test results seem to indicate they do. For that 10% their “abnormal” findings are a misleading false-positive result.
The more specific a test is, the fewer “false-positive” results it produces. A false-positive result can lead to misdiagnosis and to unnecessary, possibly challenging or life-altering, diagnostic procedures and therapies. Before a diagnosis that calls for dangerous therapy is acted on, it must be confirmed, and a test’s specificity is one of the crucial indicators of how much confidence a positive result deserves.
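The arithmetic behind the 90%-specific example above can be sketched in Python (the counts are the hypothetical 100 healthy people from the text):

```python
def specificity(true_negatives, false_positives):
    """Fraction of disease-free people the test correctly reports as normal."""
    return true_negatives / (true_negatives + false_positives)

# Of 100 healthy people tested with a 90%-specific method,
# 90 are correctly found "normal" and 10 get misleading
# false-positive ("abnormal") results.
print(specificity(true_negatives=90, false_positives=10))  # 0.9
```

Note that specificity is computed entirely from people who do *not* have the disease; it says nothing about how well the test finds people who do.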
Although few if any tests succeed in diagnosing disease correctly 100% of the time, most tests produce only a small proportion of false-positive or false-negative results. Accreditation requirements oblige laboratories to use the most sensitive and specific tests available.
Sensitivity is the ability of a test to correctly identify people who have a given disease or disorder. For example, a certain test may have proven to be 90% sensitive. That is, if 100 people known to have a certain disease are tested with that method, the test will correctly identify 90 of those 100 cases of disease. The other 10 people who were tested also have the disease but the test will fail to detect it. For that 10%, the finding of a “normal” result is a misleading false-negative result. A test’s sensitivity becomes particularly important when you are seeking to exclude a dangerous disease.
The more sensitive a test, the fewer “false-negative” results it produces. A false-negative result fails to identify a disease state even though it is present.
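Sensitivity is the mirror image of specificity, computed from the people who *do* have the disease. A small Python sketch of the 90%-sensitive example from the text (counts hypothetical):

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of diseased people the test correctly flags as abnormal."""
    return true_positives / (true_positives + false_negatives)

# Of 100 people known to have the disease, a 90%-sensitive test
# correctly identifies 90; the other 10 receive misleading
# false-negative ("normal") results.
print(sensitivity(true_positives=90, false_negatives=10))  # 0.9
```

When the goal is to rule out a dangerous disease, a high value from this calculation matters most: every false negative is a case the test missed.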