Tuesday, March 25, 2014

Why is phage science so weak?

 

My last post discussed the disappointing results of a study of the effectiveness of phage therapy in animal models of diabetic ulcers. Although the authors reported improvement over sham treatment, the effects were only marginally significant - there was some reduction in bacterial load, but little or no effect on wound healing.

Well, that happens. Drug development is hard, and there are always going to be results that are less than stellar. The question is what has been learned, and whether it can be put to good use in improving the technology. Unfortunately the authors of the study did not ask this question, at least in print, so it is hard to tell whether the study is a tiny step forward on the path to therapeutic development, or just another random result that goes nowhere.

I'm sorry to say that this is where the vast majority of phage therapy studies go - absolutely nowhere. And given that phage therapy could have a real role to play in infectious disease therapy, it's tiresome and unacceptable. We really should be moving beyond the stage of reporting phage therapy results as mere phenomena. After all, there is nearly a century of clinical and preclinical experience behind us, and you would think that phage therapy development efforts would be pretty sophisticated by now. But you would be wrong.

The thing that I find so appalling about this study is that the authors simply collected phage for the study from a convenient sewer. I'm sure they tested them for reactivity against a panel of clinically relevant strains grown in pure cultures in rich media to log phase and plated on agar plates in the lab. But these phage didn't do so well in the complex environment of a diabetic wound, causing only a slight decrease in the bacterial load. This shouldn't have been a surprise, but apparently it was.

Most bacteriophage genes are dispensable in the laboratory environment. That is, they can be mutated or deleted, and the phage are able to propagate just fine. The functions of these genes remain largely unknown, even to the present day. But we can guess, intelligently. My guess is that a large fraction help the phage adapt to less-favorable environmental conditions. They allow the phage to productively infect potential hosts by defeating a variety of host defenses, and to maximize their output of progeny in a host that is constantly warding off immunological, bacteriological, nutritional and environmental challenges.

If this is true - and it has to be true to some degree - then the notion of taking a phage that has adapted to sewer life or life in the gut, and expecting it to thrive in the environment of a wound is somewhere between naive and downright stupid. It doesn't advance the development of phage therapy, it discredits it. It is fundamentally unserious.

There are serious scientists who are interested in developing phage therapy. Stephen Abedon's work is an outstanding example. But they are still a minority. I don't know how many times I have sat through presentations at a phage meeting that were just embarrassing, thinking "why did the organizers of this session pick this clown to speak?". But then the session leaders would get up to give their talks, and they would be just as bad, and I would have my answer.

So the question arises as to why phage science is so bad. There are a number of contributing factors, of course. It is no longer the 8th day of creation. Molecular biology has "moved on" to more complex organisms, and funding has largely dried up. This has been true not just of phage biology, but of microbiology in general.

Funding is surely a factor, but isn't the whole explanation. RNA science was a backwater in the 70s, but this didn't prevent brilliant science from being done by the likes of Noller, Woese, Spiegelman and others. Instead, I think it is the lack of funding combined with a low barrier to entry that makes phage science such a refuge of mediocrity. RNA science in the 70s and 80s was really hard - there was no way to synthesize RNA either enzymatically or chemically (except for very small fragments in very small quantities), and so getting enough material to do a study often required heroic efforts. But that didn't stop Woese from fundamentally reorganizing the Tree of Life, or Noller from showing that it is ribosomal RNA that directs protein synthesis.

That phage science hasn't made any paradigm-shifting discoveries since the 60s is hardly shameful - phage long ago secured their place in the Pantheon of model organisms. But it is scarcely credible that so little progress has been made in converting these model citizens of science into useful therapeutics.

Phage are everywhere and can be isolated, propagated and analyzed by very simple methods. That's what led professional physicists/amateur biologists like Delbruck and Crick to adopt phage as models of living systems. But it also means that just about anyone can find a new phage or alter an old one, stick it into some diagnostic or therapeutic model system, and show an effect that is statistically significant but functionally meaningless. This has been happening over and over since antibiotic resistance began to emerge as a significant medical problem, leading to an endless succession of review articles touting phage therapy as a new and emerging therapeutic alternative to antibiotics. That phage therapy keeps not emerging does not seem to make it any less new and promising to the writers of these reviews, nor does it abate the flow of such articles.

For translational phage science to improve, the first step is to call out the purveyors of mediocrity - to acknowledge that there is a problem, and that a higher standard is required. There is a real clinical need that phage therapy can fill, and enabling second-rate science benefits no one.

 

Friday, February 21, 2014

A swing and a miss for team phage

The rise of diabetes in the developed world has brought along with it a rise in persistent ulcers and abscesses. These infections don't respond particularly well to either systemic or topical antibiotics and as a result, an unfortunately high proportion have to be resolved by amputation. This is nobody's notion of an acceptable medical outcome.

Phage therapy has been used in wound and burn treatment for decades in some countries, and should have several potential advantages over antibiotics in treating diabetic ulcers. Unlike antibiotics, phage tend to be highly specific in their action, allowing them to be targeted toward pathogens (S. aureus, Pseudomonas, Streptococcus, Acinetobacter) while leaving commensal bacteria alone. Antibiotics are much more indiscriminate, killing the good bacteria along with the bad. As some commensals, such as S. epidermidis, have been reported to secrete immune-stimulating compounds, it would seem that a treatment which spares them might confer a significant clinical advantage.

Phage have also been reported to penetrate biofilms, something antibiotics generally fail to do. Biofilms are characteristic of persistent infections, and they make their resident bacteria much more resistant to eradication.

Phage also have the power of replication: one phage-infected bacterial cell will give rise to tens or hundreds of phage progeny. These progeny can then go on to infect other bacterial cells, potentially creating a chain-reaction that could wipe out the pathogen bacterial community at a wound site.

Phage treatment of diabetic ulcers would thus seem to have great potential to improve the care of these stubborn infections. That's why the recent paper from Mendes et al is such a disappointment. The authors did indeed see a positive response to phage treatment of diabetic ulcers in two animal models (mice and pigs), and the responses were sometimes strong enough that they probably weren't random. But that's about all you can say - the differences between treated and untreated wounds just weren't all that great. Given that most preclinical results, even very strong and powerful ones, usually fade significantly when translated to clinical settings, it is hard to get excited by marginally observable effects in animal models.

Why such weak results? Phage therapy really should work for this indication, and I think that it can be made to work. But my brows were lifted in disbelief when I read this description in Methods: "S. aureus ... P. aeruginosa ... and A. baumannii lytic bacteriophages were isolated from sewage water from the Lisbon area." What!!?? Bacteriophages are highly diverse creatures that have evolved under intense selective pressure to thrive and survive in specific environments. There is no reason to expect that a sewage phage - which probably had been living and competing in the environment of the lower intestine - would prosper in the entirely different environment of a diabetic ulcer.

That the phage fared poorly in wounds is implied by the doses used in the experimental treatment: 10 - 100 times the number of bacteria in the wounds. At this concentration, phage do not even have to successfully infect a host cell in order to kill it - the degradative enzymes they use to gain entry to the cell are sufficient to do the job. The authors probably found in preliminary experiments that low doses of phage had no effect, and rather than find better phage, simply upped the dose.

Phage therapy is a promising approach for treating diabetic ulcers and deserves to be pursued. But it is unlikely that a random phage picked up in the environment is going to be an effective medicine. Screening and selecting phages that work in the environment of a diabetic ulcer will require a lot of hard and frustrating work. That's just how drug development is. Random chemicals rarely turn out to have the efficacy, tox profile and PK/PD required of useful drugs. It is naive, and ultimately harmful, to expect development of phage therapeutics to be any less difficult.

 

Wednesday, February 19, 2014

This is just depressing

I know I often sound hostile or dismissive of the medical profession. But it's hard not to when you read something like this recent study in Clinical Infectious Diseases.

The most valid excuse physicians have for not prescribing the most appropriate antibiotics is that the information required is not available when it is needed. This is a real concern - a sepsis patient simply can't wait 3 days for the lab to return microbial identification and susceptibility results before beginning treatment.

But what about patients who present with wound infections - sick, certainly, but in no imminent danger of death or disablement? And what if you only need a couple of hours to get the information needed to treat effectively? And can do so while minimizing the risk of adverse events and maximizing antibiotic stewardship? This seems like a trick question, doesn't it? Of course you would seek out that information and use it to the best advantage of the patient.

If so, then you would be practicing good medicine, but you would apparently be an outlier. That's the only conclusion one can draw from the results of the CID study. Wound patients admitted to the UCLA-Olive View center were tested for the presence of MRSA or S. aureus (presumptively methicillin-sensitive) using the Cepheid GeneXpert real-time PCR test. Treating physicians had been educated on the use and interpretation of the test, and in an additional phase of the study, were contacted by a pharmacist.

The hope was that only the patients suffering from MRSA infections would be given anti-MRSA antibiotics such as vancomycin. This matters because vancomycin is probably not as effective as β-lactams, is more likely to have toxic side effects, and exposes patients to the risk of C. difficile-associated diarrhea, which kills some 14,000 Americans per year.

What the study found was that the rapid-test results had no discernible effect on physician prescribing practices: 60-70% of the non-MRSA patients received anti-MRSA antibiotics in the absence of test results, and 60-70% received them with test results. If physicians had used the test results, this number would have been 7%.

This was not a particularly large study, and it was conducted at a single institution. But still - how can this be? I appreciate that doctors are, and should be, conservative when it comes to adopting changes in practice. But the Cepheid test is not especially new - it received FDA clearance in 2008. False-negative results for MRSA are very rare, less than 1% in the literature and 0% in the CID study. So there was essentially no risk to their patients that erroneous test results would lead to ineffective treatment. These facts must have been presented to the physicians as part of the education phase of the study.

The prestige of the medical profession has always made doctors susceptible to a kind of mulish arrogance. In previous centuries and decades, this arrogance manifested itself as a resistance to improved hygiene, or in the bullying of patients. It's hard to see a refusal to follow practice guidelines when provided with timely information as anything other than a continuation of this sordid tradition.

 

Monday, February 3, 2014

Not good enough

Humans are notoriously bad at estimating risks. Airline travel makes us nervous even though it is hundreds of times safer than driving. We feel safer having a gun, even though the odds of shooting ourselves or a loved one are many times higher than the chance we will shoot an attacker.

I was reminded of this common frailty in recent exchanges with ID docs about the risks and benefits of empiric prescription of antibiotics. Empiric prescription is the practice of prescribing an antibiotic before having identified the infecting bacteria or determined what antibiotics they are susceptible to. That this happens is neither shocking nor wrong. It typically takes 2-3 days to get this information, and treatment of seriously ill patients can't wait until it is available.

Empiric prescriptions are generally based on patient history, signs and symptoms, along with local epidemiology and perhaps a Gram stain. Other than that, it's mostly guessing and it is inevitable that some guesses will be wrong.

Rising levels of antibiotic resistance have made successful empiric prescription much more difficult. Patient history and clinical presentation can tell a doctor a lot about the likely identity of the infecting bacteria: if the patient has pneumonia, it's likely Strep; a skin infection is probably Staph; and a urinary tract infection is usually E. coli.

But signs and symptoms say nothing about what resistance genes those bacteria might be carrying. Other than knowing local resistance patterns, and that a patient is at elevated risk of resistance to antibiotics they have previously been prescribed, it is pretty much a pure guess whether a particular infection is susceptible to a given antibiotic.

So it should be no surprise that doctors often prescribe the wrong antibiotic for serious infections. How often does this happen?

Pretty often. One study showed that half of patients with MRSA bloodstream infections received inappropriate empiric prescriptions. Another found that a third of patients with Pseudomonas infections were treated inappropriately. ICU patients received inappropriate antibiotic therapy 36% of the time. And 43% of serious infections were treated inappropriately according to yet another study. I could go on, but you get the idea: your chance of getting a suboptimal or ineffective antibiotic by empiric prescription is somewhere between a third and a half.

This matters quite a bit. The odds of dying from a serious infection go up by a factor of 1.6 if the initial prescription is wrong. That's a lot - no other class of drugs, e.g. for cancer or heart disease, has anywhere near the impact on your odds of survival that antibiotics do, and so getting the antibiotics right is crucial.

To put some numbers on the effect of inappropriate empiric prescription, consider that 2 million patients are treated for sepsis each year in the US, and about 300,000 of them die. If we assume that 65% of sepsis patients get appropriate initial antibiotic therapy, and do a little algebra, then we can estimate that the death rate for appropriately treated patients is 12%, while that for inappropriately treated patients is 20%. Multiply that out by 2 million patients and you get 56,000 excess deaths due to guessing wrong on which antibiotic to use.
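The algebra in this estimate can be checked in a few lines of Python (a sketch using only the figures quoted above; the exact solve gives about 12.4% and 19.8%, and rounding those to 12% and 20% reproduces the ~56,000 figure):

```python
# Back-of-envelope check of the sepsis numbers above.
# Assumptions (all from the text): 2M US sepsis patients/year, 300k deaths,
# 65% get appropriate initial antibiotics, and inappropriate therapy
# multiplies the death rate by 1.6.

patients = 2_000_000
deaths = 300_000
frac_appropriate = 0.65
risk_ratio = 1.6

# Solve  patients * (f*d + (1-f)*1.6*d) = deaths  for d, the death rate
# under appropriate treatment.
d_appropriate = deaths / (patients * (frac_appropriate + (1 - frac_appropriate) * risk_ratio))
d_inappropriate = risk_ratio * d_appropriate

excess_deaths = patients * (1 - frac_appropriate) * (d_inappropriate - d_appropriate)

print(f"appropriate:   {d_appropriate:.1%}")    # ~12.4%
print(f"inappropriate: {d_inappropriate:.1%}")  # ~19.8%
print(f"excess deaths: {excess_deaths:,.0f}")   # ~52,000 exact; ~56,000 with rounded rates
```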

That's more deaths than are due to breast cancer or prostate cancer, more than are due to homicide and suicide, more than result from automobile accidents. Which is why I am taken aback when ID docs say, in effect: sure, it would be nice to have a rapid diagnostic, but if there isn't one, we will just continue with empiric prescription and it's all good.

What accounts for this comfort with the risks of empiric prescription? Several cognitive factors are probably at play here. Familiarity is one factor - we underestimate the risks of familiar activities. Physicians routinely prescribe antibiotics on an empirical basis, and the vast majority of their patients get better.

But how much better? It is really hard to tell the difference in patient outcomes by simple observation. Most patients get well. In the example of sepsis, 80% of the patients given inappropriate empiric treatment still recover, as compared to the 88% who recover when given appropriate treatment. This is a big difference when extended over many thousands of patients, but pretty hard to discern over dozens or even hundreds. To use a sports analogy, imagine two basketball players: one shoots free throws with 80% accuracy, the other with 88%. You couldn't tell who was the better shooter over the course of a few games, or even a season, without careful record keeping. In fact, you'd need each player to shoot 329 free throws to be 95% confident that one was better than the other.
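The 329-shot figure can be reproduced with a standard two-proportion sample-size formula (a sketch assuming a two-sided test at significance level 0.05 with 80% power, which is the convention that yields that number):

```python
# How many free throws per player are needed to tell an 80% shooter
# from an 88% shooter? Standard two-proportion sample-size calculation,
# stdlib only.
from math import sqrt, ceil
from statistics import NormalDist

def shots_needed(p1, p2, alpha=0.05, power=0.80):
    """Shots per player needed to detect the difference between
    make rates p1 and p2 at the given significance and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2                       # pooled proportion
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

print(shots_needed(0.80, 0.88))  # 329 free throws per player
```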

And whereas every free throw is pretty much like another, sepsis patients are all different and usually have underlying conditions - cancer, trauma, diabetes etc. - that contribute to mortality. So it is pretty easy to rationalize those factors as being the cause of death, rather than an inappropriate prescription. It's just human nature to ascribe our failures to bad luck and our successes to skill.

The way to combat human cognitive biases is with data and information. Rapid testing is beginning to provide the information needed to save lives, but doctors have to insist that hospitals buy and use these tests, even if they are significantly more expensive than Petri dishes. Saying "we will just prescribe empirically" isn't good enough anymore.

 

Thursday, January 16, 2014

Antibiotic overuse in the ER - rapid test needed

A lot of antibiotics get prescribed from the ER. And unfortunately, a lot of them get prescribed inappropriately there. That's the take-home from a report by Donnelly et al in AAC this month.

Over 12M Americans visit the ER for acute respiratory infections every year, and about 60% leave with a prescription for antibiotics. This number has decreased only slightly in the last decade, from 62% to 58%, despite increased stewardship efforts.

The key parameter is how many of these patients got antibiotics who should not have, and here the numbers are discouraging: about 48% of patients not diagnosed with a bacterial infection still got antibiotics.

It's not too hard to understand why this happens, especially given the transient nature of the doctor-patient relationship in the ER setting. But that's a lot of antibiotics to be challenging our collective microbiomes with, and almost certainly a major driver of the spread of antibiotic resistance.

More education seems unlikely to help, and requiring pharmacist or ID doc sign-off for prescriptions seems unrealistic for ER care. The ideal solution would be a rapid test that distinguishes bacterial from viral infections.

Given that there are hundreds of potential pathogens for respiratory infections, the strategy for developing a viral vs bacterial test can't be based on pathogen detection. Instead, this test would take advantage of the differential response of our immune systems to bacterial vs viral infections.

One such test already exists: the procalcitonin (PCT) test. PCT release is part of the cytokine response to infection - cytokines being the molecules that different cell types in the immune system use to communicate and coordinate their responses. PCT levels rise in response to bacterial but not viral infections. The availability of PCT assays in the ER has been shown to reduce excessive antibiotic use by 30-80%. Unfortunately PCT has some limitations as a biomarker - its levels also rise in a number of shock states, not just bacterial infections.

It is unlikely that PCT or any other single biomarker will serve to adequately distinguish bacterial from viral infections. However, a couple of studies (here and here) have shown that arrays of biomarkers - some 15 to 30 in all - collectively perform quite well at this task.

So the concept of a bacterial-viral test is sound, but the path to execution is not yet obvious. The vast majority of diagnostic assays measure a single analyte. There are a few multiplexed assays for bacterial identification and resistance gene detection, such as the Cepheid Xpert or the Nanosphere Verigene systems. But the genes of immune cells don't vary in response to infection - it is the pattern of mRNA and protein expression that changes. Neither the Cepheid nor the Nanosphere system currently measures mRNA levels, and therefore some serious product development would be required for either of these platforms.

For detection of protein expression, the only multiplex platforms are mass spectrometry and the SomaLogic SomaScan system. Sample prep is always the Achilles heel of mass spec - the amount of sample cleanup required to get a good signal makes it a poor platform for a rapid test. The SomaScan system is currently configured only as a lab-developed test that takes a couple of days to return results. It could potentially be developed into an IVD platform, but that hasn't happened yet.

So there is a significant opportunity here - urgent unmet clinical need, much wasteful spending, very large markets. Is anyone up to the challenge?

 

Tuesday, January 7, 2014

Doing what the FDA won't

I haven't written about the FDA's new voluntary relabeling of antibiotics for livestock use because I don't expect it to have much effect. It is a positive step in the sense that it acknowledges the problem of antibiotic overuse in livestock production, but that's about it.

The Netherlands tried a similar approach in the 2000s, banning the use of antibiotics as growth promoters. The "therapeutic" use of antibiotics promptly expanded to the point where overall consumption of antibiotics remained unchanged. The result for public health? Being a pig farmer classifies you as a presumptive carrier of MRSA in Dutch hospitals, which are otherwise nearly free of MRSA.

So that didn't work out so well there, and I expect the results of the FDA's action to be about the same here. The tell is that the antibiotic manufacturers are going along with it, not squealing too loudly, you might say. Thus they expect little disruption to their business model.

Ideally we might follow the lead of Denmark, which has instituted an effective ban, and has had no trouble maintaining or increasing its production of pork and chicken. But that sort of edict is not likely to happen here - the political power of rural constituencies and drug suppliers is much greater than that of public interest regulators.

Raising the cost of antibiotics would be the most effective way to ensure that they are put to the highest-value uses. A tax on antibiotics would discourage indiscriminate use. If the proceeds were rebated to producers on a per-pig basis, farmers would be incentivized to minimize use of antibiotics. In effect, the antibiotic-free producers would get a subsidy. They would be the winners, helping to split political opposition to the tax. The heavy users of antibiotics and the pharmas who manufacture and sell them would be the losers. However, given the power of the status quo in American governance, they would likely be able to block change.
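To make the incentive structure concrete, here is a toy calculation with entirely hypothetical numbers (the tax rate, herd sizes and usage figures are invented for illustration, not drawn from any real producer):

```python
# Sketch of the tax-and-rebate scheme described above: a per-kg tax on
# antibiotics whose proceeds are rebated to producers per pig. Heavy users
# pay in more than they get back; antibiotic-free producers come out ahead.

tax_per_kg = 10.0  # hypothetical tax, $/kg of antibiotic

# (pigs raised, kg of antibiotic used) for three hypothetical producers
producers = {
    "heavy user":      (10_000, 5_000),
    "typical":         (10_000, 1_000),
    "antibiotic-free": (10_000, 0),
}

total_tax = sum(kg for _, kg in producers.values()) * tax_per_kg
total_pigs = sum(pigs for pigs, _ in producers.values())
rebate_per_pig = total_tax / total_pigs  # revenue-neutral rebate

for name, (pigs, kg) in producers.items():
    net = pigs * rebate_per_pig - kg * tax_per_kg  # rebate received minus tax paid
    print(f"{name}: net ${net:+,.0f}")
```

The nets sum to zero by construction: the scheme transfers money from heavy users to light users rather than raising revenue, which is what splits the political opposition.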

Like it or not, some mechanism to preserve pharma profits will have to be found. This could come in the form of a "floor" on antibiotic prices, much like price supports for milk and other commodities. A floor would allow margins to go up as volume goes down, preserving profitability.

This sort of solution is ugly - it basically amounts to bribing farmers and pharmas to not endanger public health. But it would actually result in reduced usage of antibiotics in livestock. That's something that the FDA is unlikely to ever do on its own.

 

Wednesday, January 1, 2014

"Stumpage" fees for animal antibiotics

Aidan Hollis and Ziana Ahmed have published a short and terrific letter in NEJM outlining the benefits of a user fee for agricultural uses of antibiotics (h/t HuffPost). Key points:

  1. More than 13M kg of antibiotics are sold for ag use in the US each year, at a cost of less than $25/kg.
  2. A ban on the use of antibiotics as growth promoters would raise prices some $2B per year.
  3. The medical value of antibiotics is more than $60T in the US; thus even a 1% loss of susceptibility due to ag use is a big economic loss, much larger than the economic benefits of ag use.
  4. They liken antibiotic susceptibility to a commons, and propose a user fee that would be akin to stumpage fees paid by loggers.
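The arithmetic behind point 3 is simple enough to spell out (figures as quoted from the letter; the comparison is rough, since the $2B is an annual price effect while the $60T is a total valuation):

```python
# Order-of-magnitude check of point 3: even a small erosion of antibiotic
# susceptibility from ag use dwarfs the economic benefit of that use.

medical_value = 60e12       # $60T: claimed medical value of antibiotics in the US
ag_benefit = 2e9            # ~$2B/yr: price effect of banning growth promoters
susceptibility_loss = 0.01  # a 1% loss of susceptibility

cost_of_loss = medical_value * susceptibility_loss
print(f"cost of 1% susceptibility loss: ${cost_of_loss / 1e9:,.0f}B")  # $600B
print(f"ratio to ag benefit: {cost_of_loss / ag_benefit:,.0f}x")       # 300x
```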

There's more - but go read it for yourself.