
Sensitivity, Prevalence and Predictive Values

1. Core Knowledge:

Sensitivity and specificity can be considered fixed properties of a diagnostic test. [This is a slight simplification, but it's good enough for our purposes].

However, clinicians are chiefly concerned with the predictive value of the test, rather than its sensitivity. The clinical question is: how likely is a patient with a positive test result to actually have the disease? [Yes, you may also be concerned with the negative predictive value, but let's use the positive predictive value (PPV) as the current example.]

A crucial point is that prevalence affects the predictive value of any test. This means that the same diagnostic test will have a different predictive accuracy according to the clinical setting in which you are applying it!
    Whoa! That sounds weird...

The following table illustrates this phenomenon.
It holds sensitivity and specificity constant, at 99% and 95% (this is a REALLY good test…)
You can just look at the first and last rows (the coloured ones):  
As prevalence rises from 1% (e.g., diabetes among 30-year-olds) to 20% (e.g., among 70-year-olds), the PPV rises from 17% to 83%: a huge difference in the clinical interpretation of the same test result.  
[The middle rows of the table show how this result is calculated]

The Impact on Positive Predictive Value (PPV) as Prevalence Changes,
for a test with 99% Sensitivity and 95% Specificity

                                            Prevalence
                                        1%      10%      20%
   a  # in population                1,000    1,000    1,000
   b  Diseased                          10      100      200
   c  Not diseased                     990      900      800
   d  True positives (b x 0.99)         10       99      198
   e  False positives (c x (1-0.95))    50       45       40
   f  Total # positive on test (d + e)  60      144      238
      PPV (d / f)                       17%      69%      83%

(Source: Dr. Chan Shah: Public Health and Preventive Medicine in Canada. Elsevier, Canada, 2003)
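The table's arithmetic can be checked in a few lines of code. This is a minimal sketch (the function name `ppv` is mine, not from the source): it follows the same steps as rows b through f, but works with proportions rather than counts per 1,000.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: true positives / all test positives."""
    true_pos = prevalence * sensitivity                # row d, as a proportion
    false_pos = (1 - prevalence) * (1 - specificity)   # row e
    return true_pos / (true_pos + false_pos)           # row d / row f

# Reproduce the table's bottom row (99% sensitivity, 95% specificity):
for prev in (0.01, 0.10, 0.20):
    print(f"prevalence {prev:4.0%}: PPV = {ppv(0.99, 0.95, prev):.1%}")
```

The printed values (16.7%, 68.8%, 83.2%) round to the table's 17%, 69%, and 83%.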

So, your interpretation of any test result depends not only on sensitivity and specificity, but also on the baseline prevalence of the disorder in the population you are working with. Unless specificity is perfect, falling prevalence means that a growing share of positive results will be false positives, so the PPV drops. You therefore need to be roughly aware of the prevalence in the population you are treating. This illustrates a population health approach to routine clinical work.

Here is a display that you can manipulate yourself. Click here to explore what happens to test performance when prevalence changes, and when you alter the cut-point on the test. Note: you will need Excel 2007 for this. You may need to re-position the display on your screen to allow you to move the two slider bars.
Warm thanks to Paul Lee, PhD, University of Hong Kong, for programming this.

Link: Bayesian estimation methods illustrate this phenomenon.


Nerd's Corner

PPV

The following diagram shows the same thing in a different way.  With rare conditions, the positive predictive value is driven by the specificity of the test.
Remember that the false positive rate reflects specificity (mnemonic: 1 - sPecificity = false Positive rate).
When prevalence is low, there are few true positives in the population, and false positives can be large compared to the number of true positives.  Hence the positive predictive value falls.
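The swamping effect can be made concrete with counts. This sketch (the population size and the 0.1% prevalence are my illustrative choices, not figures from the text) counts true and false positives per 100,000 people screened with the same 99%-sensitive, 95%-specific test:

```python
population = 100_000
sensitivity, specificity = 0.99, 0.95
prevalence = 0.001  # a rare condition: 1 case per 1,000 people

diseased = population * prevalence                        # 100 people
true_pos = diseased * sensitivity                         # 99 correctly detected
false_pos = (population - diseased) * (1 - specificity)   # 4,995 false alarms
ppv = true_pos / (true_pos + false_pos)

print(f"true positives: {true_pos:.0f}, false positives: {false_pos:.0f}")
print(f"PPV = {ppv:.1%}")
```

With only 99 true positives buried among roughly 5,000 false alarms, the PPV is under 2%: the few true positives are swamped, exactly as the diagram shows.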

[Figure: PPV by prevalence]