predict consequences from abstract knowledge: Knowing God's vision, what can you predict about Man? But Bayes's theorem takes the more pragmatic and humble approach to inference. It is based on real, observable knowledge: Knowing Man's world, Bayes asks, what can you guess about the mind of God?
....
How might this apply to a medical test? The equations described by Bayes teach us how to interpret a test given our prior knowledge of risk and prevalence: If a man has a history of drug addiction, and if drug addicts have a higher prevalence of HIV infection, then what is the chance that a positive test is real? A test is not a Delphic oracle, Bayes reminds us; it is not a predictor of perfect truths. It is, rather, a machine that modifies probabilities. It takes information in and puts information out. We feed it an "input probability" and it gives us an "output probability." If we feed it garbage, then it will inevitably spit garbage out.
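For readers who want the "machine" written out, one standard way to state the rule the passage describes is:

\[
P(\text{hypothesis} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{hypothesis}) \cdot P(\text{hypothesis})}{P(\text{evidence})}
\]

The prior, the "input probability," enters on the right; the posterior, the "output probability," comes out on the left. Feed in a poor prior and the output degrades accordingly.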
The peculiar thing about the "garbage in, garbage out" rule is that we are quick to apply it to information or computers, but are reluctant to apply it to medical tests. Take PSA testing, for instance. Prostate cancer is an age-related cancer: the incidence climbs dramatically as a man ages. If you test every man over the age of forty with a PSA test, the number of false positives will doubtless overwhelm the number of true positives. Thousands of needless biopsies and confirmatory tests will be performed, each adding complications, frustration, and cost. If you use the same test on men above sixty, the yield might increase somewhat, but the false-positive and false-negative rates might still be forbidding. Add more data (family history, risk factors, genetics, or a change in PSA value over time) and the probability of a truly useful test keeps getting refined. There is no getting away from this logic. Yet demands for indiscriminate PSA testing to "screen" for prostate cancer keep erupting among patients and advocacy groups.
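To see why indiscriminate screening drowns in false positives, consider a back-of-the-envelope calculation. The numbers below (a 0.5 percent prevalence of detectable cancer among younger men, 80 percent sensitivity, a 10 percent false-positive rate) are illustrative assumptions, not figures from the text:

\[
P(\text{cancer} \mid \text{positive}) = \frac{0.80 \times 0.005}{0.80 \times 0.005 + 0.10 \times 0.995} \approx 0.04
\]

Under these assumed numbers, a positive result still leaves only about a 4 percent chance of cancer: of every 100,000 men screened, roughly 400 true positives would be buried under nearly 10,000 false positives. Raising the prior, by screening older or higher-risk men and adding family history or PSA trends, is what rescues the calculation.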
The force of Bayes's logic has not diminished as medical information has expanded; it has only become more powerful. Should a woman with a mutant BRCA1 gene have a double mastectomy? "Yes" and "no" are both foolish answers. The presence of a BRCA1 mutation is well known to increase the risk of ovarian or breast cancer, but the actual risk varies vastly from person to person. One woman might develop a lethal, rapidly growing breast cancer at thirty; another woman might only develop an indolent variant in her eighties. A Bayesian analyst would ask you to seek more information: Did a woman's mother or grandmother have breast cancer? At what age? What do we know about her previous risks (genes, exposures, environments)? Are any of the risks modifiable?
If you scan the daily newspapers to identify the major "controversies" simmering through medicine, they inevitably concern Bayesian analysis, or a fundamental lack of understanding of Bayesian theory. Should a forty-year-old woman get a mammogram? Well, unless we can modify the prior probability of her having breast cancer, chances are that we will pick up more junk than real cases of cancer. What if we invented an incredibly sophisticated blood test to detect Ebola? Should we screen all travelers at the airport using such a test and thereby prevent the spread of a lethal virus into the United States? Suppose I told you, further, that every person who had Ebola tested positive with this test, and the only drawback was a modest 5 percent false-positive rate. At first glance, it seems like a no-brainer. But watch what happens with Bayesian analysis. Assume that 1 percent of travelers are actually infected with Ebola, a hefty fraction. If a man tests positive at the airport, what is the actual chance that he is infected? Most people guess some number between 50 and 90 percent. The actual answer is about 16 percent. If
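The arithmetic behind that figure is a single application of the rule stated earlier; the only inputs are the numbers the passage itself supplies (a 1 percent prior, a perfectly sensitive test, and a 5 percent false-positive rate):

\[
P(\text{Ebola} \mid \text{positive}) = \frac{1.00 \times 0.01}{1.00 \times 0.01 + 0.05 \times 0.99} = \frac{0.01}{0.0595} \approx 0.168
\]

Roughly one in six positives is real: of every 10,000 travelers, the test flags all 100 infected people but also flags about 495 of the 9,900 healthy ones.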