Conjunctivitis: No Antibiotics, Please!

It’s the sad state of modern medicine – choose a common ambulatory condition, and you can find widespread avoidable overuse and waste. There is a spectrum of acceptability to this practice variation, of course, depending on the severity of consequences for missed or delayed diagnoses – but, for the most part, we’re just setting our professional respectability aflame.

This is a simple retrospective review of prescriptions associated with diagnoses of acute conjunctivitis. These authors reviewed records from a large managed care network and identified 340,372 patients with a clinical visit coded for acute conjunctivitis. Within 14 days of this visit, 58% of patients filled prescriptions for topical ophthalmologic medications. Considering most conjunctivitis encountered in the clinical setting is viral or allergic, the vast majority of these are obviously wholly unnecessary. And, frankly, while topical antibiotics mildly hasten the improvement of bacterial conjunctivitis, it remains a generally self-limited condition.

Ophthalmologist and optometrist visits were the least likely to have an antibiotic prescription associated with a visit for acute conjunctivitis, but still came in at 36% and 44%, respectively. Urgent Care Physicians and “Other Provider” – probably inclusive of Emergency Medicine – were at 68% and 64%, respectively. Fluoroquinolones accounted for 33% of antibiotic prescriptions – which is fabulous, because they are typically the most costly and carry increased risk for both antimicrobial resistance and S. aureus endophthalmitis. Then, one in five prescriptions were for corticosteroid-antibiotic combination products – which are contraindicated, as they can prolong viral infections or worsen an underlying herpes simplex infection.

The American Academy of Ophthalmology contribution to Choosing Wisely recommends avoiding antibiotic prescriptions for viral conjunctivitis, and deferring immediate antibiotic therapy when the cause of conjunctivitis is unknown. Stop the madness! Everyone!

“Antibiotic Prescription Fills for Acute Conjunctivitis among Enrollees in a Large United States Managed Care Network”

https://www.ncbi.nlm.nih.gov/pubmed/28624168

Nothing But Advantages to Treating Stroke Mimics!

What is the acceptable rate of treatment of stroke mimics with tPA? Zero? A few percent?  No limit?  It’s mostly harmless, after all – with only a ~1% rate of intracerebral hemorrhage. And, thanks to the free-market forces of comparison shopping and collective bargaining power of individual stroke patients, the cost of alteplase has increased >100% in the past decade to ~$6400 per dose. With all this going for it, it’s no wonder the American Heart Association gives a Class II recommendation for empirically treating, rather than pursuing additional diagnostic tests.

The added bonus – the more mimics you treat, the better your stroke outcomes appear!

This retrospective review of 725 tPA-treated patients at three hospitals evaluated the difference in the rate of treatment of stroke mimics at an MRI-based “hub” hospital and CT-based “spokes”. Of 514 patients treated at the hub, only 3 (0.6%) were ultimately given a non-stroke diagnosis. Of 211 treated at the spokes, 33 (16%) were stroke mimics. The authors also noted, splitting their review period into 2005-09 and 2010-14, that the rate of treatment of stroke mimics at the spokes had increased from 9% to 20%.

To no great surprise, clinical outcomes – as measured by both mRS ≤1 at five days after discharge and hemorrhagic transformation – significantly favored the spoke hospitals. Outcomes also improved between the two time periods compared – hand-in-hand with the increase in treatment of stroke mimics.

These authors go on to mention treatment of stroke mimics has real financial cost to the health system and to individual patients, the misdiagnosis of stroke notwithstanding – growing ever more important as our health system lurches back towards penalties for pre-existing conditions. The authors acknowledge the luxury of having rapid MRI available for stroke, but go on to implicate aggressive efforts to improve door-to-needle times as contributing to misdiagnosis and harmful waste.

But, none of that matters when you can get a shiny promotional merit badge for your stroke center!

“Effects of increasing IV tPA-treated stroke mimic rates at CT-based centers on clinical outcomes”
http://www.neurology.org/content/early/2017/06/28/WNL.0000000000004149.abstract

The Door-to-Lasix Quality Measure

Will [door-to-furosemide] become the next quality measure in modern HF care? Though one could understand enthusiasm to do so ….

No.

No one would understand such enthusiasm, despite the hopeful soaring rhetoric of the editorial accompanying this article. That enthusiasm will never materialize.

The thrills stacked to the ceiling here are based on the data in the REALITY-AHF registry, a multi-center, prospective, observational cohort designed to collect data on treatments administered in the acute phase of heart failure treatment in the Emergency Department. Twenty hospitals in Japan, a mix of academic and community centers, participated. Time-to-furosemide, based on the authors’ review of prior evidence, was prespecified as a particular data point of interest.

They split their cohort of 1,291 analyzed patients between “early” and “non-early” furosemide administration, meaning within 60 minutes of ED arrival versus greater than 60 minutes. Unadjusted mortality was 2.3% in the early treatment group and 6% in the non-early group – and similar, but slightly smaller, differences persisted after multivariate adjustment and propensity matching. The authors conclude, based on these observations, the association between early furosemide treatment and mortality may be clinically important.

Of course, any observational cohort is not able to make the leap from association to causation. It is, however, infeasible to randomize patients with acute heart failure to early vs. non-early furosemide – so this is likely close to the highest level of evidence we will receive. And any attempt at adjustment and propensity matching will always be limited by unmeasured confounders, despite incorporating nearly 40 different variables. Finally, patients with pre-hospital diuretic administration were excluded, which is a bit odd, as they would make for an interesting comparison group of their own.
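For readers less familiar with the technique, propensity matching here just means estimating each patient’s probability of receiving early furosemide from baseline covariates, then comparing mortality between early and non-early patients with similar probabilities. The sketch below is a generic illustration on synthetic data – the covariates, outcome rates, and 1:1 nearest-neighbor matching scheme are assumptions for demonstration, not the REALITY-AHF authors’ actual ~40-covariate specification:

```python
# Generic propensity-score-matching sketch on synthetic data -- illustrative
# only, not the study authors' actual adjustment model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1291                                                # analyzed cohort size from the paper
X = rng.normal(size=(n, 5))                             # hypothetical baseline covariates
early = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # "early furosemide" depends on covariates
death = rng.binomial(1, 0.04, size=n)                   # placeholder mortality outcome

# 1. Estimate each patient's propensity score: P(early treatment | covariates).
ps = LogisticRegression().fit(X, early).predict_proba(X)[:, 1]

# 2. Match each early-treatment patient to the non-early patient with the
#    closest propensity score (1:1 nearest-neighbor matching, no caliper).
treated = np.where(early == 1)[0]
control = np.where(early == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. Compare mortality within the matched sample.
print("early group mortality:      ", death[treated].mean())
print("matched non-early mortality:", death[matched_control].mean())
```

However well such scores balance the measured covariates, the caveat above stands: unmeasured confounders are untouched by the matching.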

All that said, I do believe their results are objectively valid – if clinically uninterpretable. The non-early furosemide cohort includes both patients who received medication in the first couple of hours of their ED stay and those whose first furosemide dose was not given until up to 48 hours after arrival. This probably turns the heart of the comparison into “appropriately recognized” versus “possibly mismanaged”, rather than a narrow comparison of simply furosemide, early vs. not. Time may indeed matter – but the heterogeneity and clinical trajectories of patients treated between 60 minutes and 48 hours after ED arrival defy collapse into a dichotomous “early vs. non-early” comparison.

And this certainly ought not give rise to another nonsensical time-based quality metric imposed upon the Emergency Department.

“Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure”

http://www.onlinejacc.org/content/69/25/3042

Blood Cultures Save Lives and Other Pearls of Wisdom

It’s been sixteen years since the introduction of Early Goal-Directed Therapy in the Emergency Department. For the past decade and a half, our lives have been turned upside-down by quality measures tied to the elements of this bundle. Remember when every patient with sepsis was mandated to receive a central line? How great were the costs – in real dollars, in time, and in actual harms – from these well-intentioned yet erroneous directives based on a single trial?

Regardless, thanks to the various follow-ups testing strict protocolization against the spectrum of timely recognition and aggressive intervention, we’ve come a long way. However, there are still mandates incorporating the vestiges of such elements of care – such as those introduced by the New York State Department of Health. Patients diagnosed with severe sepsis or septic shock are required to complete protocols consisting of 3-hour and 6-hour bundles including blood cultures, antibiotics, and intravenous fluids, among others.

This article, from the New England Journal, retrospectively examines the mortality rates associated with completion of these various elements. Among patients in whom the 3-hour bundle was initiated within 6 hours of arrival to the Emergency Department, the authors stratified outcomes by time-to-completion of each bundle element and examined the associated mortality.

Winners: obtaining blood cultures, administering antibiotics, and measuring serum lactate
Losers: time to completion of a bolus of intravenous fluids

Of course, since blood cultures are obtained prior to antibiotic administration, these outcomes are co-linear – and the cultures don’t actually save lives, as facetiously suggested in the post heading. But antibiotic administration was associated with a fraction of a percent of increased mortality per hour of delay over the first 12 hours after initiation of the bundle. Intravenous fluid administration, however, showed no apparent association with mortality.

These data are fraught with issues, of course, relating to their retrospective nature and the limitations of the underlying data collection. Their adjusted model accounts for a handful of features, but there are still potential confounders influencing the mortality of those who completed their bundle within 3 hours as compared with those who did not. The differences in mortality, while a hard and important endpoint, are quite small. Earlier is probably better, but the individual magnitude of benefit will be unevenly distributed around the average benefit, and while a delay of several hours might matter, minutes probably do not. The authors are appropriately reserved in their conclusions, however, stating only that these observational data support associations between mortality and antibiotic administration, and do not extend to any causal inferences.
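To make “a fraction of a percent per hour” concrete – using a purely hypothetical increment of 0.3 percentage points per hour of delay, chosen only because it sits in the range that phrase implies and not taken from the paper – the arithmetic behind “hours might matter, minutes probably do not” looks like this:

```latex
\[
\underbrace{0.003}_{\substack{\text{hypothetical absolute}\\ \text{risk increase per hour}}} \times 4\ \text{hours} \approx 1.2\ \text{percentage points},
\qquad
0.003 \times \tfrac{10}{60}\ \text{hours} \approx 0.05\ \text{percentage points}.
\]
```

A several-hour delay plausibly moves mortality by an amount worth caring about; a ten-minute metric-driven scramble almost certainly does not.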

The lack of an association between intravenous fluids and mortality, however, raises significant questions requiring further prospective investigation. Could it be, after these years wandering in the wilderness with such aggressive protocols, the only universally key feature is the initiation of appropriate antibiotics? Do our intravenous fluids, given without regard to individual patient factors, simply harm as many as they help, resulting in no net benefit?

These questions will need to be addressed in randomized controlled trials before the next level of evolution in our approach to sepsis, but the equipoise for such trials may now exist – to complete our journey from Early Goal-Directed to Source Control and Patient-Centered.  The difficulty will be, again, in pushing back against well-meaning but ill-conceived quality measures whose net effect on Emergency Department resource utilization may be harm, with only small benefits to a subset of critically ill patients with sepsis.

“Time to Treatment and Mortality during Mandated Emergency Care for Sepsis”

http://www.nejm.org/doi/full/10.1056/NEJMoa1703058

Correct, Endovascular Therapy Does Not Benefit All Patients

Unfortunately, that headline is the strongest takeaway available from these data.

Currently, endovascular therapy for stroke is recommended for all patients with a proximal arterial occlusion who can be treated within six hours. The much-ballyhooed “number needed to treat” for benefit is approximately five, and we have authors generating nonsensical literature with titles such as “Endovascular therapy for ischemic stroke: Save a minute—save a week” based on statistical calisthenics from this treatment effect.
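As a reminder of where that headline number comes from, the NNT is simply the reciprocal of the absolute difference in the dichotomized outcome between trial arms – so an NNT of roughly five corresponds to an absolute difference on the order of 20 percentage points, averaged over everyone treated:

```latex
\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} \approx 5
\quad\Longrightarrow\quad
\mathrm{ARR} = \frac{1}{\mathrm{NNT}} \approx \frac{1}{5} = 0.20 .
\]
```

That 0.20 is an average across the entire trial population – which is exactly the problem described next.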

But, anyone actually responsible for making decisions for these patients understands this is an average treatment effect. The profound improvements of a handful of patients with the most favorable treatment profiles obfuscate the limited benefit derived by the majority of those potentially eligible.

These authors have endeavored to apply a bit of precision medicine to the decision regarding endovascular intervention. Using ordinal logistic regression modeling, they built a predictive model on the MR CLEAN data for good outcome (mRS score 0-2 at 90 days), then used the IMS-III data as their validation cohort. The final model displayed a C-statistic of 0.69 for the ordinal model and 0.73 for good functional outcome – which is to say, the output is closer to a coin flip than an informative prediction for use in clinical practice.
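For context on why those numbers are underwhelming: the C-statistic is just the area under the ROC curve – the probability the model ranks a randomly chosen patient with a good outcome above a randomly chosen patient without one, where 0.5 is a coin flip and 1.0 is perfect discrimination. The toy sketch below, on synthetic data sized to the 500-patient derivation and 260-patient validation cohorts, shows how such a number is produced; the predictors, coefficients, and binary (rather than ordinal) outcome are assumptions for illustration, not the authors’ model:

```python
# Toy illustration of a C-statistic (ROC AUC) for a binary "good outcome"
# model -- synthetic data only, not MR CLEAN or IMS-III.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_deriv, n_valid = 500, 260                    # cohort sizes quoted in the post
X = rng.normal(size=(n_deriv + n_valid, 6))    # hypothetical baseline predictors
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]          # deliberately weak signal
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = "good outcome" (e.g., mRS 0-2)

X_deriv, y_deriv = X[:n_deriv], y[:n_deriv]    # derivation set
X_valid, y_valid = X[n_deriv:], y[n_deriv:]    # validation set

model = LogisticRegression().fit(X_deriv, y_deriv)
c_stat = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"C-statistic on validation data: {c_stat:.2f}")
# With a weak signal like this, the C-statistic lands around 0.7 --
# much closer to a coin flip (0.5) than to perfect discrimination (1.0).
```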

More important, however, is whether the substrate for the model is anachronistic, limiting its generalizability to modern practice. Beyond MR CLEAN, subsequent trials have demonstrated the importance of underlying tissue viability, using either CT perfusion or MRI-based selection criteria when making treatment decisions. This model includes only a measure of collateral circulation on angiogram, which is merely a surrogate for potential tissue viability. Furthermore, the MR CLEAN cohort comprises only 500 patients, and the IMS-III validation only 260. This sample is far too small to properly develop a model for such a heterogeneous set of patients as those presenting with proximal cerebrovascular occlusion. Finally, the choice of logistic regression can be debated, simply from a modeling standpoint, given its assumptions about underlying linear relationships in the data.

I appreciate the attempt to improve outcomes prediction for individual patients, particularly for a resource-intensive therapy such as endovascular intervention in stroke. Unfortunately, I feel the fundamental limitations of their model invalidate its clinical utility.

“Selection of patients for intra-arterial treatment for acute ischaemic stroke: development and validation of a clinical decision tool in two randomised trials”
http://www.bmj.com/content/357/bmj.j1710

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In a time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

The consequence of this confluence of factors is, unfortunately, a handful of patients who meet their end shortly following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% of patients died within 7 days of discharge, and identified a few interesting risk factors regarding their outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously much older, with a greater burden of comorbidities. They also found similarities regarding the risks for death – most prominently, “altered mental status”. The full accounting of clinical features is described in the figure below:

[Figure: clinical features associated with early death after discharge]
Then, there were some system-level factors as well. In their multivariate model, rural emergency departments and those with low annual volumes were potentially associated with an increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, these are interesting – and not terribly surprising – data, even if it is hard to identify specific operational interventions based on such broad strokes.

“Early death after discharge from emergency departments: analysis of national US insurance claims data”
http://www.bmj.com/content/356/bmj.j239

Insight Is Insufficient

In this depressing trial, we witness a disheartening truth – physicians won’t necessarily do better, even if they know they’re not doing well.

This study tested a mixed educational and peer-comparison intervention on primary care physicians in Switzerland, with the end goal of improving antibiotic stewardship for common ambulatory complaints. The “worst-performing” 2,900 physicians with respect to antibiotic prescribing rates were enrolled and randomized to the study intervention or to no intervention. The study intervention consisted of materials regarding appropriate prescribing, along with personalized feedback regarding where their prescribing rate ranked compared with the entire national cohort. The core of their hypothesis was whether this passive knowledge of peer performance alone would exert a normalizing influence over their practice.

Unfortunately, even with this insight, as well as tools for improvement, the net effect of the intervention was effectively zero. There were some observations regarding changes in prescribing rates for certain age groups and for certain types of antibiotics, but dredging through these secondary outcomes leads only to unreliable conclusions.

These are not particularly surprising data. Passive feedback mechanisms unhitched from material consequences have never previously been shown to be effective. There are other, more effective mechanisms – focused education, decision-support interventions, and shared decision-making – but, for a fragmented national health system, this represented a relatively inexpensive model to test.

Try again!

“Personalized Prescription Feedback Using Routinely Collected Data to Reduce Antibiotic Use in Primary Care”

https://www.ncbi.nlm.nih.gov/pubmed/28027333

Stumbling Around Risks and Benefits

Practicing clinicians contain multitudes: the vastness of critical medical knowledge applicable to the nearly infinite permutations of individual patients. However, apparently lost in the shuffle is a grasp of the basic fundamentals necessary for shared decision-making: the risks, benefits, and harms of many common treatments.

This simple research letter describes a survey distributed to a convenience sample of residents and attending physicians at two academic medical centers. Physicians were asked to estimate the incidence of a variety of effects from common treatments, both positive and negative. A sample question and result:

[Figure: treatment effect estimates – sample survey questions and physician responses]
The green responses are those that fell into the correct range for the question. As you can see, for these two sample questions, hardly any physician surveyed guessed correctly. The same pattern is repeated for the remaining questions – involving peptic ulcer prevention, cancer screening, and bleeding complications on aspirin and anticoagulants.

Notably, only a quarter of participants were attending physicians – though no gross differences in performance were observed between the various levels of experience. Then, some of the ranges are narrow, with small magnitudes of effect separating the “correct” and “incorrect” answers. Regardless, the general conclusion of this survey – that we’re not well-equipped to communicate many of the most common treatment effects – is probably valid.

“Physician Understanding and Ability to Communicate Harms and Benefits of Common Medical Treatments”
http://www.ncbi.nlm.nih.gov/pubmed/27571226

Your New Career in “Waiting Room Medicine”

A few years back, a facetious advertisement in the Canadian Journal of Emergency Medicine promoted the availability of fellowship positions in “Waiting Room Medicine”, a comedic take on the struggles of the specialty to manage increasing patient volumes with limited resources. While there are certainly Emergency Departments with ample space and “white glove”-type service – see the for-profit expansion of free-standing EDs in states like Texas – there are also publicly funded and other EDs that struggle to find physical bed space for patients for a variety of reasons.

This study attempts to quantify the effect of an intervention utilized by many overburdened or otherwise saturated EDs – starting the initial evaluation in triage with either provider-directed or protocolized orders. At UCLA/Olive-View, all patients presenting to an already-full ED received an initial rapid evaluation by an attending physician or nurse practitioner. During the 10-month study period, non-pregnant adults with abdominal pain were randomized either to receive initial evaluation orders following this encounter or to return to the waiting room to await full evaluation once a bed became available.

There were 1,691 patients enrolled and randomized, with approximately 10% excluded from analysis, mostly because they left the ED before their evaluation was complete. Overall, initiating the work-up in triage saved patients approximately a half-hour, on average, of bedded time in the ED. This was reflected by a similar absolute decrease in overall ED length-of-stay. There were a couple of other interesting tidbits unique to their execution:

  • The most profound difference associated with WR medicine was simply blood and urine testing. While imaging could be ordered up front, it was rarely done.
  • Some of the advantages related to the WR blood testing were minimized by ~13% of patients receiving further testing after being bedded in the ED.
  • Patients randomized to WR medicine received, on average, a greater number of diagnostics per patient, probably representing resource waste.

So – yes, this probably accurately reflects the impact of orders placed in triage: some wasted resources based on the initial, incomplete evaluation, with a trade-off of potential time savings. The extent to which your system might benefit from a similar set-up is probably related to your level of chronic bed scarcity.

“Initiating Diagnostic Studies on Patients With Abdominal Pain in the Waiting Room Decreases Time Spent in an Emergency Department Bed: A Randomized Controlled Trial”
http://www.annemergmed.com/article/S0196-0644(16)30360-2/abstract

The Downside of Antibiotic Stewardship

There are many advantages to curtailing antibiotic prescribing. Costs are reduced, fewer antibiotic-resistant bacteria are induced, and treatment-associated adverse events are avoided.

This retrospective, population-based study, however, illuminates the potential drawbacks. Using electronic record review spanning 10 years of general practice encounters, these authors compared infectious complication rates between practices with low and high antibiotic prescribing rates. Across 45.5 million person-years of follow-up after office visits for respiratory tract infections, there is both reason for reassurance and reason for further concern.

On the “pro” side, rates of mastoiditis, empyema, bacterial meningitis, intracranial abscess, and Lemierre’s syndrome were no different between practices that prescribed at high rates (>58%) and those at low rates (<44%). However, there is a reasonably clear linear relationship with excess follow-up encounters for both pneumonia and peritonsillar abscess. Incidence rate ratios were 0.70 compared with reference for pneumonia and 0.78 for peritonsillar abscess. That said, the absolute differences can best be described as a “large handful” and a “small handful” of extra cases per 100,000 encounters.
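To connect the relative and absolute framings: an incidence rate ratio of 0.70 in favor of the high-prescribing practices means the lower-prescribing group saw roughly 1.4 times the rate of pneumonia follow-up encounters, yet because the baseline incidence per 100,000 encounters is small, that relative excess still works out to only the handful of absolute cases described above:

```latex
\[
\frac{1}{\mathrm{IRR}_{\text{pneumonia}}} = \frac{1}{0.70} \approx 1.43,
\qquad
\text{absolute excess per } 100{,}000 \;=\; \text{baseline rate} \times \left(\frac{1}{\mathrm{IRR}} - 1\right) \approx 0.43 \times \text{baseline rate}.
\]
```

The same arithmetic applied to the 0.78 ratio for peritonsillar abscess gives an even smaller relative excess (roughly 28%), on an even lower baseline incidence.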

There are many rough edges and flaws relating to these data, some of which are probably adequately defeated by the massive cohort size. I think it is reasonable to interpret this article as accurately reflecting true harms from antibiotic stewardship. More work should absolutely be pursued in terms of strategies to mitigate these potential downstream complications, but I believe the balance of benefits and harms still falls on the side of continued efforts in stewardship.

“Safety of reduced antibiotic prescribing for self limiting respiratory tract infections in primary care: cohort study using electronic health records”

http://www.bmj.com/content/354/bmj.i3410