Monday, February 3, 2014

Not good enough

Humans are notoriously bad at estimating risks. Airline travel makes us nervous even though it is hundreds of times safer than driving. We feel safer having a gun, even though the odds of shooting ourselves or a loved one are many times higher than the chance we will shoot an attacker.

I was reminded of this common frailty in recent exchanges with ID docs about the risks and benefits of empiric prescription of antibiotics. Empiric prescription is the practice of prescribing an antibiotic before identifying the infecting bacteria or determining what antibiotics it is susceptible to. That this happens is neither shocking nor wrong. It typically takes 2-3 days to get this information, and treatment of seriously ill patients can't wait until it is available.

Empiric prescriptions are generally based on patient history, signs and symptoms, local epidemiology, and perhaps a Gram stain. Beyond that, it's mostly guessing, and it is inevitable that some guesses will be wrong.

Rising levels of antibiotic resistance have made successful empiric prescription much more difficult. Patient history and clinical presentation can tell a doctor a lot about the likely identity of the infecting bacteria: if the patient has pneumonia, it's likely Strep; a skin infection is probably Staph; and a urinary tract infection is usually E. coli.

But signs and symptoms say nothing about what resistance genes those bacteria might be carrying. Beyond knowing local resistance patterns, and knowing that a patient is at elevated risk of resistance to any antibiotic previously prescribed to them, whether a particular infection is susceptible to a given antibiotic is pretty much a pure guess.

So it should be no surprise that doctors often prescribe an inappropriate antibiotic for serious infections. How often does this happen?

Pretty often. One study showed that half of patients with MRSA bloodstream infections received inappropriate empiric prescriptions. Another found that a third of patients with Pseudomonas infections were treated inappropriately. ICU patients received inappropriate antibiotic therapy 36% of the time. And 43% of serious infections were treated inappropriately according to yet another study. I could go on, but you get the idea: your chance of getting a suboptimal or ineffective antibiotic by empiric prescription is somewhere between a third and a half.

This matters quite a bit. The odds of dying from a serious infection go up by a factor of 1.6 if the initial prescription is wrong. That's a lot. No other class of drugs, e.g. for cancer or heart disease, has anywhere near the impact on your odds of survival that antibiotics do, so getting the antibiotic right is crucial.

To put some numbers on the effect of inappropriate empiric prescription, consider that 2 million patients are treated for sepsis each year in the US, and about 300,000 of them die. If we assume that 65% of sepsis patients get appropriate initial antibiotic therapy, and do a little algebra, then we can estimate that the death rate for appropriately treated patients is 12%, while that for inappropriately treated patients is 20%. Multiply that out by 2 million patients and you get 56,000 excess deaths due to guessing wrong on which antibiotic to use.
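The "little algebra" above can be sketched in a few lines. The inputs (2 million patients, 300,000 deaths, 65% appropriate therapy, a 1.6 mortality ratio) are from the post; note that with unrounded rates the excess comes out near 52,000 deaths, and the post's 56,000 figure follows from the rounded 12% and 20% rates.

```python
# Back-of-the-envelope check of the sepsis arithmetic in the post.
patients = 2_000_000          # sepsis patients treated per year (US)
deaths = 300_000              # deaths per year
frac_appropriate = 0.65       # assumed share getting appropriate therapy
mortality_ratio = 1.6         # inappropriate vs. appropriate therapy

# Overall mortality = f*p + (1 - f)*1.6*p, solved for p:
overall = deaths / patients
p_appropriate = overall / (frac_appropriate
                           + (1 - frac_appropriate) * mortality_ratio)
p_inappropriate = mortality_ratio * p_appropriate

excess = patients * (1 - frac_appropriate) * (p_inappropriate - p_appropriate)

print(f"appropriate:   {p_appropriate:.1%}")    # ~12.4% (post rounds to 12%)
print(f"inappropriate: {p_inappropriate:.1%}")  # ~19.8% (post rounds to 20%)
print(f"excess deaths: {excess:,.0f}")          # ~52,000; 56,000 with rounded rates
```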

That's more deaths than are due to breast cancer or prostate cancer, more than are due to homicide and suicide, more than result from automobile accidents. Which is why I am taken aback when ID docs say, in effect, sure it would be nice to have a rapid diagnostic, but if there isn't one, we will just continue with empiric prescription and it's all good.

What accounts for this comfort with the risks of empiric prescription? Several cognitive factors are probably at play here. Familiarity is one factor - we underestimate the risks of familiar activities. Physicians routinely prescribe antibiotics on an empirical basis, and the vast majority of their patients get better.

But how much better? It is really hard to tell the difference in patient outcomes by simple observation. Most patients get well. In the example of sepsis, 80% of the patients given inappropriate empiric treatment still recover, as compared to the 88% who recover when given appropriate treatment. This is a big difference when extended over many thousands of patients, but pretty hard to discern over dozens or even hundreds. To use a sports analogy, imagine two basketball players: one shoots free throws with 80% accuracy, the other with 88%. You couldn't tell who was the better shooter over the course of a few games, or even a season, without careful record keeping. In fact, you'd need each player to shoot 329 free throws to be 95% confident that one was the better shooter.

And whereas every free throw is pretty much like another, sepsis patients are all different and usually have underlying conditions - cancer, trauma, diabetes etc. - that contribute to mortality. So it is pretty easy to rationalize those factors as being the cause of death, rather than an inappropriate prescription. It's just human nature to ascribe our failures to bad luck and our successes to skill.

The way to combat human cognitive biases is with data and information. Rapid testing is beginning to provide the information needed to save lives, but doctors have to insist that hospitals buy and use these tests, even if they are significantly more expensive than Petri dishes. Saying "we will just prescribe empirically" isn't good enough anymore.
