Friday, February 21, 2014

A swing and a miss for team phage

The rise of diabetes in the developed world has brought along with it a rise in persistent ulcers and abscesses. These infections don't respond particularly well to either systemic or topical antibiotics and as a result, an unfortunately high proportion have to be resolved by amputation. This is nobody's notion of an acceptable medical outcome.

Phage therapy has been used in wound and burn treatment for decades in some countries, and should have several potential advantages over antibiotics in treating diabetic ulcers. Unlike antibiotics, phage tend to be highly specific in their action, allowing them to be targeted toward pathogens (S. aureus, Pseudomonas, Streptococcus, Acinetobacter) while leaving commensal bacteria alone. Antibiotics are much more indiscriminate, killing the good bacteria along with the bad. As some commensals, such as S. epidermidis, have been reported to secrete immune-stimulating compounds, it would seem that a treatment which spares them might confer a significant clinical advantage.

Phage have also been reported to penetrate biofilms, which antibiotics generally fail to do. Biofilms are characteristic of persistent infections and make their resident bacteria much harder to eradicate.

Phage also have the power of replication: one phage-infected bacterial cell will give rise to tens or hundreds of phage progeny. These progeny can then go on to infect other bacterial cells, potentially creating a chain-reaction that could wipe out the pathogen bacterial community at a wound site.

Phage treatment of diabetic ulcers would thus seem to have great potential to improve the care of these stubborn infections. That's why the recent paper from Mendes et al. is such a disappointment. The authors did indeed see a positive response to phage treatment of diabetic ulcers in two animal models (mice and pigs), and some of the responses were strong enough that they probably weren't random. But that's about all you can say - the differences between treated and untreated wounds just weren't all that great. Given that most preclinical results, even very strong ones, usually fade significantly when translated to clinical settings, it is hard to get excited by marginally observable effects in animal models.

Why such weak results? Phage therapy really should work for this indication, and I think that it can be made to work. But my brows were lifted in disbelief when I read this description in Methods: "S. aureus ... P. aeruginosa ... and A. baumannii lytic bacteriophages were isolated from sewage water from the Lisbon area." What!!?? Bacteriophages are highly diverse creatures that have evolved under intense selective pressure to thrive and survive in specific environments. There is no reason to expect that a sewage phage - which probably had been living and competing in the environment of the lower intestine - would prosper in the entirely different environment of a diabetic ulcer.

That the phage fared poorly in wounds is implied by the doses used in the experimental treatment: 10 to 100 times the number of bacteria in the wounds. At these concentrations, a phage does not even have to successfully infect a host cell in order to kill it; the degradative enzymes phage use to gain entry to the cell are sufficient to do the job. The authors probably found in preliminary experiments that lower doses of phage had no effect and, rather than find better phage, simply upped the dose.
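To see why doses like that are suspicious, consider a back-of-the-envelope adsorption calculation. Assuming every added phage finds a cell and that hits per cell follow a Poisson distribution (textbook simplifications on my part, not anything reported in the paper), a multiplicity of 10-100 phage per bacterium means essentially every cell gets hit, most of them many times over:

```python
from math import exp

# Back-of-the-envelope sketch: assume all added phage adsorb, and that the
# number of phage hitting any given cell is Poisson-distributed with mean
# equal to the phage-to-bacteria ratio (MOI). These are simplifying
# assumptions, not data from the paper.
for moi in (1, 10, 100):
    p_unhit = exp(-moi)   # Poisson P(0 hits): cell adsorbs no phage at all
    print(f"MOI {moi:>3}: fraction of cells never hit = {p_unhit:.1e}, "
          f"mean hits per cell = {moi}")

# At MOI 10, fewer than 1 cell in 20,000 escapes adsorption; at MOI 100 it is
# effectively none. In that regime, killing can occur without any productive
# infection, which is why a high-dose effect says little about phage quality.
```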

Phage therapy is a promising approach for treating diabetic ulcers and deserves to be pursued. But it is unlikely that a random phage picked up in the environment is going to be an effective medicine. Screening and selecting phages that work in the environment of a diabetic ulcer will require a lot of hard and frustrating work. That's just how drug development is. Random chemicals rarely turn out to have the efficacy, tox profile and PK/PD required of useful drugs. It is naive, and ultimately harmful, to expect development of phage therapeutics to be any less difficult.

 

Wednesday, February 19, 2014

This is just depressing

I know I often sound hostile or dismissive of the medical profession. But it's hard not to when you read something like this recent study in Clinical Infectious Diseases.

The most valid excuse physicians have for not prescribing the most appropriate antibiotics is that the information required is not available when it is needed. This is a real concern - a sepsis patient simply can't wait 3 days for the lab to return microbial identification and susceptibility results before beginning treatment.

But what about patients who present with wound infections - sick, certainly, but in no imminent danger of death or disablement? And what if you only need a couple of hours to get the information needed to treat effectively? And can do so while minimizing the risk of adverse events and maximizing antibiotic stewardship? This seems like a trick question, doesn't it? Of course you would seek out that information and use it to the best advantage of the patient.

If so, then you would be practicing good medicine, but you would apparently be an outlier. That's the only conclusion one can draw from the results of the CID study. Wound patients admitted to the Olive View-UCLA Medical Center were tested for the presence of MRSA or S. aureus (presumptively methicillin-sensitive) using the Cepheid GeneXpert real-time PCR test. Treating physicians had been educated on the use and interpretation of the test and, in an additional phase of the study, were contacted by a pharmacist.

The hope was that only the patients suffering from MRSA infections would be given anti-MRSA antibiotics such as vancomycin. This matters because vancomycin is probably not as effective as beta-lactams, is more likely to have toxic side effects, and exposes patients to the risk of C. difficile-associated diarrhea, which kills some 14,000 Americans per year.

What the study found was that the rapid-test results had no discernible effect on physician prescribing practices: 60-70% of the non-MRSA patients received anti-MRSA antibiotics in the absence of test results, and 60-70% received them with test results. If physicians had used the test results, this number would have been 7%.

This was not a particularly large study, and it was conducted at a single institution. But still - how can this be? I appreciate that doctors are, and should be, conservative when it comes to adopting changes in practice. But the Cepheid test is not especially new - it received FDA clearance in 2008. False-negative results for MRSA are very rare, less than 1% in the literature and 0% in the CID study. So there was essentially no risk to their patients that erroneous test results would lead to ineffective treatment. These facts must have been presented to the physicians as part of the education phase of the study.

The prestige of the medical profession has always made doctors susceptible to a kind of mulish arrogance. In previous centuries and decades, this arrogance manifested itself as resistance to improved hygiene or as the bullying of patients. It's hard to see a refusal to follow practice guidelines, even when provided with timely information, as anything other than a continuation of this sordid tradition.

 

Monday, February 3, 2014

Not good enough

Humans are notoriously bad at estimating risks. Airline travel makes us nervous even though it is hundreds of times safer than driving. We feel safer having a gun, even though the odds of shooting ourselves or a loved one are many times higher than the chance we will shoot an attacker.

I was reminded of this common frailty in recent exchanges with ID docs about the risks and benefits of empiric prescription of antibiotics. Empiric prescription is the practice of prescribing an antibiotic before having identified the infecting bacteria or determined which antibiotics they are susceptible to. That this happens is neither shocking nor wrong. It typically takes 2-3 days to get this information, and treatment of seriously ill patients can't wait until it is available.

Empiric prescriptions are generally based on patient history, signs and symptoms, along with local epidemiology and perhaps a Gram stain. Other than that, it's mostly guessing and it is inevitable that some guesses will be wrong.

Rising levels of antibiotic resistance have made successful empiric prescription much more difficult. Patient history and clinical presentation can tell a doctor a lot about the likely identity of the infecting bacteria: if the patient has pneumonia, it's likely Strep; a skin infection is probably Staph; and a urinary tract infection is usually E. coli.

But signs and symptoms say nothing about what resistance genes those bacteria might be carrying. Other than knowing local resistance patterns, and that a patient is at elevated risk of resistance to antibiotics previously prescribed, it is pretty much a pure guess whether a particular infection will be susceptible to a given antibiotic.

So it should be no surprise that doctors often prescribe an inappropriate antibiotic for serious infections. How often does this happen?

Pretty often. One study showed that half of patients with MRSA bloodstream infections received inappropriate empiric prescriptions. Another found that a third of patients with Pseudomonas infections were treated inappropriately. ICU patients received inappropriate antibiotic therapy 36% of the time. And 43% of serious infections were treated inappropriately according to yet another study. I could go on, but you get the idea: your chance of getting a suboptimal or ineffective antibiotic by empiric prescription is somewhere between a third and a half.

This matters quite a bit. The odds of dying from a serious infection go up by a factor of about 1.6 if the initial prescription is wrong. That's a lot - no other class of drugs, whether for cancer or heart disease, has anywhere near the impact on your odds of survival that antibiotics do, so getting the antibiotic right is crucial.

To put some numbers on the effect of inappropriate empiric prescription, consider that 2 million patients are treated for sepsis each year in the US, and about 300,000 of them die. If we assume that 65% of sepsis patients get appropriate initial antibiotic therapy, and do a little algebra, then we can estimate that the death rate for appropriately treated patients is 12%, while that for inappropriately treated patients is 20%. Multiply that out by 2 million patients and you get 56,000 excess deaths due to guessing wrong on which antibiotic to use.
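Here is a minimal sketch of that algebra, treating the 1.6 figure as a simple mortality multiplier and taking the 65% appropriate-therapy share as given (both are rough assumptions layered on the post's round numbers):

```python
# Minimal sketch of the "little algebra" above. Assumptions: the 1.6 figure is
# treated as a plain risk ratio, and 65% of sepsis patients are taken to get
# appropriate initial therapy; the patient and death counts are from the post.
patients = 2_000_000      # US sepsis cases per year
deaths = 300_000          # overall sepsis deaths per year
frac_appropriate = 0.65   # assumed share receiving appropriate empiric therapy
risk_ratio = 1.6          # assumed mortality multiplier for inappropriate therapy

overall_rate = deaths / patients                      # 0.15
# overall_rate = f * p + (1 - f) * risk_ratio * p  ->  solve for p
p_appropriate = overall_rate / (frac_appropriate + (1 - frac_appropriate) * risk_ratio)
p_inappropriate = risk_ratio * p_appropriate

excess = (1 - frac_appropriate) * patients * (p_inappropriate - p_appropriate)
print(f"appropriate: {p_appropriate:.1%}, inappropriate: {p_inappropriate:.1%}")
print(f"excess deaths per year: {excess:,.0f}")
# Prints roughly 12% and 20%, with ~52,000 excess deaths; rounding the two
# rates to exactly 12% and 20% reproduces the ~56,000 figure quoted above.
```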

That's more deaths than are due to breast cancer or prostate cancer, more than are due to homicide and suicide, more than result from automobile accidents. Which is why I am taken aback when ID docs say, in effect: sure, it would be nice to have a rapid diagnostic, but if there isn't one, we will just continue with empiric prescription and it's all good.

What accounts for this comfort with the risks of empiric prescription? Several cognitive factors are probably at play here. Familiarity is one factor - we underestimate the risks of familiar activities. Physicians routinely prescribe antibiotics on an empirical basis, and the vast majority of their patients get better.

But how much better? It is really hard to tell the difference in patient outcomes by simple observation. Most patients get well. In the sepsis example, 80% of the patients given inappropriate empiric treatment still recover, compared to the 88% who recover when given appropriate treatment. That is a big difference when extended over many thousands of patients, but pretty hard to discern over dozens or even hundreds. To use a sports analogy, imagine two basketball players: one shoots free throws with 80% accuracy, the other with 88%. You couldn't tell who was the better shooter over the course of a few games, or even a season, without careful record keeping. Even with careful records, you'd need each player to shoot about 329 free throws to be 95% confident that one was the better shooter.
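For anyone who wants to check the 329 figure: it is consistent with a standard two-proportion sample-size calculation at a two-sided significance level of 0.05 with 80% power (the power level is my assumption - the post only says "95% confident"):

```python
from math import ceil, sqrt
from scipy.stats import norm

# Shots per shooter needed to distinguish an 80% from an 88% free-throw
# percentage, using the pooled two-proportion sample-size formula.
# Alpha = 0.05 (two-sided) and 80% power are assumed values.
p1, p2 = 0.80, 0.88
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
p_bar = (p1 + p2) / 2
n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
     + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
print(ceil(n))   # -> 329 free throws per player
```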

And whereas every free throw is pretty much like another, sepsis patients are all different and usually have underlying conditions - cancer, trauma, diabetes, etc. - that contribute to mortality. So it is pretty easy to rationalize those factors as being the cause of death, rather than an inappropriate prescription. It's just human nature to ascribe our failures to bad luck and our successes to skill.

The way to combat human cognitive biases is with data and information. Rapid testing is beginning to provide the information needed to save lives, but doctors have to insist that hospitals buy and use these tests, even if they are significantly more expensive than Petri dishes. Saying "we will just prescribe empirically" isn't good enough anymore.