Should We Keep Patients in the Dark on Costs?

That seems to be the overwhelming opinion of folks interviewed for this recent News & Perspective from Annals of Emergency Medicine.

Citing everything from ignorance to the Emergency Medical Treatment and Labor Act, several clinicians in this vignette make the case that discussions of cost have no role in Emergency Department care.  Victor Friedman, of the ACEP board of directors, says costs are “irrelevant to me as a provider …. The billing and all that stuff comes later.”  Ellis Weeker from CEP America is concerned that any discussion of costs might influence decisions regarding whether patients are seen, and potentially represent an EMTALA violation.

On the other hand, Neal Shah of Costs of Care points out there are real patient harms secondary to the financial burdens of healthcare, in no small part because of the astounding charges meted out from the Emergency Department.  While patients rarely see (or pay) the fantasy prices on the chargemaster, the burdens of even a fraction of these costs may mean choosing between food and insulin, or heat and clopidogrel.

If you’ve seen my writing to this effect, you know I fall squarely on the side of “costs should be communicated”, within reason.  I agree with Dr. Shah that many Emergency Department interactions are “urgent” rather than “emergent”, and there is time to include costs as an adverse effect of a test or therapy.  I look forward to the communication instrument his team is developing.

“Price Transparency in the Emergency Department”
http://www.sciencedirect.com/science/article/pii/S0196064414004211

The Broken ED Sepsis Quality Measure

Are there yet sufficient mandates in the Emergency Department?  Door-to-physician times, door-to-CT time in acute ischemic stroke, door-to-analgesia for long bone fractures – and, on the horizon, National Quality Forum proposed measures for delivery of sepsis bundle components within 3 and 6 hours.

The problem? As these authors discover, even for patients ultimately receiving a diagnosis of severe sepsis and septic shock, many do not meet those criteria within 3 hours, or in the Emergency Department.  These authors perform a retrospective review of 113 patients from a public Level 1 trauma center and 372 from a university teaching hospital who received at least a provisional diagnosis of severe sepsis or septic shock.  According to their review, 9.8% of patients at the trauma center and 15.3% of patients at the university hospital did not meet criteria for severe sepsis or septic shock within 3 hours of arrival.

No one disputes early recognition and treatment of sepsis is a cornerstone of quality Emergency Department care.  However, retrospective application of sepsis definitions to the initial time period of presentation is clearly a Quixotic quest.  Chasing every last potential severe sepsis patient will only lead to further unintended consequences, inappropriate care, and resource over-utilization – particularly because most patients with SIRS in the Emergency Department are never diagnosed with an infection.

Just as with OP-15, we should continue to work against implementation of this measure.

“Many Emergency Department Patients With Severe Sepsis and Septic Shock Do Not Meet Diagnostic Criteria Within 3 Hours of Arrival”
http://www.ncbi.nlm.nih.gov/pubmed/24680548

How Much Money Is Wasted By Endovascular Treatment for Stroke?

If you recall, last year was a bumper crop of prospective, randomized, controlled trials testing the efficacy of endovascular devices versus tPA alone for acute ischemic stroke.  These trials – SYNTHESIS, MR-RESCUE, and IMS-III – were unified by demonstrating no additive benefit.  Of course, these trials proved nothing to proponents of endovascular therapy, owing to the “outdated” devices used.

Interestingly, IMS-III also prospectively gathered costs associated with both treatment modalities.  Presumably, the authors expected to show a treatment advantage despite increased costs, and would follow-up with a cost-effectiveness analysis.  Now, since there was no advantage with endovascular treatment, this is simply a fascinating observational report.

So, how much did everything cost?  The answer, like everything in medicine:  depends on who’s paying.  Hospital charges for patients receiving tPA were a mean of $86,880, with a median of $58,247, and ranged from $13,701 to $830,652.  Hospital charges for endovascular treatment were a mean of $113,185, with a median of $86,481, and ranged from $23,350 to $552,279.  Thankfully, this is the funny money that few patients are realistically expected to pay.  Costs, on the other hand, are based on the negotiated Medicare reimbursements, and were estimated at a mean of $25,630 for IV tPA and $35,130 for endovascular therapy.  So, a fair bit of extra cost to the system for a therapy that isn’t providing any proven benefit.
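For perspective, the extra cost per case implied by the means quoted above works out to a simple back-of-the-envelope subtraction (figures taken directly from this paragraph; this is my arithmetic, not a number reported by the authors):

```python
# Mean estimated costs from IMS-III, based on negotiated Medicare reimbursements
cost_iv_tpa = 25_630        # mean cost per patient, IV tPA alone (USD)
cost_endovascular = 35_130  # mean cost per patient, endovascular therapy (USD)

# Extra cost to the system per endovascular case, absent any proven benefit
incremental_cost = cost_endovascular - cost_iv_tpa
print(incremental_cost)  # 9500
```

Roughly $9,500 per patient, multiplied across every endovascular case, is the “money down the drain” at issue.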

Given the lack of efficacy and increased costs, you’d think it should be obvious we ought not be deploying endovascular therapy widely – but, clearly, this is unfortunately not the case.  Medicare and Medicaid still reimburse for endovascular interventions – and its use is bolstered by its sponsors and other such propaganda in the NEJM.  Until proven otherwise, this is all simply money down the drain.

“Drivers of Costs Associated With Reperfusion Therapy in Acute Stroke: The Interventional Management of Stroke III Trial”
http://stroke.ahajournals.org/content/early/2014/05/13/STROKEAHA.113.003874.abstract

Abscess Management in the Era of MRSA

Every so often, it’s good to circle back from the esoteric to the basics, and remind ourselves how to provide the best, evidence-based treatment for some of the most common diseases – in this case, abscesses.

This review in the New England Journal is a reasonable, concise overview of the evidence behind management of cutaneous abscesses, updated for the increasing prevalence of methicillin-resistant Staphylococcus aureus.  And, quite simply, there’s no evidence for any reason yet to panic.  The authors of this article summarize the literature thusly:

  • Incision & drainage is definitive treatment.  Non-complicated disease does not require additional antibiotic treatment, and the incremental benefit – if any – would be single-digit differences in clinical failure.
  • Packing of abscesses is a matter of tradition; the evidence is insufficient either to confirm or to refute the practice.
  • Primary closure of abscesses after I&D is reasonable, particularly for larger, exposed, and cosmetically important areas.
  • Antibiotic coverage for primarily cellulitic soft-tissue infections ideally includes both MRSA and streptococcal coverage, but recent evidence showed no advantage to double-coverage.  Clinical trials regarding antibiotic use are ongoing:  NCT00729937 and NCT00730028.
  • Wound cultures are not necessary.

One could argue covering such basics in infection and wound management is a mundane affair for a blog frequently covering the cutting edge.  However, current management of such a common condition is so highly variable and frequently low-value that ACEP even made a point to include abscess management in their Choosing Wisely campaign list.

Now, go and do as little harm as possible.

“Management of Skin Abscesses in the Era of Methicillin-Resistant Staphylococcus aureus”
http://www.ncbi.nlm.nih.gov/pubmed/24620867

SIRS is Rarely Sepsis

You already knew this – but that hasn’t stopped your hospital from purchasing the “Sepsis Alert” tool for your electronic health record.  Now, you and your nurses get blasted with computerized interruptions every time a patient is tachycardic and has an elevated WBC count.  And, you ignore it – because it’s 1) wrong, or 2) you placed a central line and admitted the patient to the ICU half an hour ago.

But, just how often do these sepsis alerts, based on systemic inflammatory response criteria, fire erroneously?  That is the question asked by this group from Harbor-UCLA and UC Davis.  Using the National Hospital Ambulatory Medical Care Survey from 2007 to 2010, these authors attempted to estimate the frequency of true infection in the setting of SIRS.  Unfortunately, while the NHAMCS set now includes vital signs obtained at triage, it does not include results of tests, such as the WBC.  Therefore, these authors – and this is where the study breaks down a bit – were required to mathematically conjure up a range of estimates for the frequency with which patients would meet the WBC criterion for SIRS.  Based on minimum and maximum estimates, the percentage of Emergency Department visits estimated to have SIRS ranged from 9.7% to 26.0%, and the authors ultimately split the difference at 17.8% for their analysis.

Based on their estimate, there were approximately 66 million visits to Emergency Departments meeting SIRS criteria, and the largest cohort of eventual diagnoses for these patients was indeed infection – but this constituted a mere 26% of all SIRS.  The remaining diagnoses were scattered among trauma, mental disorders, respiratory diseases, and other non-specific, organ-system dysfunction, catch-all ICD-9 codes.  While the interruptions and low specificity of SIRS alert tools are the obvious problem addressed by this study, the other implication is the troubling scope of the problem:  after trauma and infection are excluded, there are approximately 42 million other ED visits that may erroneously trip institutional protocols, trigger costly unnecessary testing, and drive additional resource utilization targeting sepsis.
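The scale of the mismatch falls out of simple arithmetic on the estimates quoted above (the implied trauma figure is my back-calculation by subtraction, not a number the authors report):

```python
sirs_visits = 66_000_000   # estimated ED visits meeting SIRS criteria
infection_share = 0.26     # infection, the largest single diagnostic category

infection_visits = sirs_visits * infection_share  # ~17.2 million
other_visits = 42_000_000  # visits remaining after trauma and infection excluded

# Implied trauma visits, by subtraction from the total
implied_trauma_visits = sirs_visits - infection_visits - other_visits
print(round(infection_visits), round(implied_trauma_visits))  # 17160000 6840000
```

In other words, for every SIRS-positive visit ultimately attributed to infection, roughly two and a half others could trip a sepsis alert for nothing.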

This is the sort of decision-support that simply doesn’t add any proven value, and another venue of encroachment into efficient and effective care.

“Epidemiology of the Systemic Inflammatory Response Syndrome (SIRS) in the Emergency Department”
http://www.ncbi.nlm.nih.gov/pubmed/24868313

Dr. Wikipedia is In

… and Dr. Wikipedia is wrong.  Or, at least, that’s what most of the popular media coverage of this study perpetuated.

Given estimates that 40-70% of physicians use Wikipedia for medical information, this group thought it important to undertake a comparison of Wikipedia articles with peer-reviewed references for accuracy.  Looking at ten Wikipedia articles representative of the top ten most costly healthcare conditions, two reviewers compared declarative statements from the Wikipedia article to those cited by a reference source – limited, unfortunately, to only those articles cited by UpToDate.  In the end, the reviewers generally found a good deal that was similar between Wikipedia and UpToDate – but also a great deal that was not fully supported by peer-reviewed references.

Reviewers were generally in agreement over which facts from Wikipedia were unsupported, but not entirely.  And, of course, UpToDate and its references are hardly the definitive source of medical fact.  However, it’s probably fair to say – physicians ought to exercise a substantial level of caution when considering basing patient care off Wikipedia.

“Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions”
http://www.ncbi.nlm.nih.gov/pubmed/24778001

The Pain of Patient Satisfaction

From a business standpoint, it seems entirely reasonable to value patient satisfaction.  However, from a medical standpoint, high-quality ethical care, with clear, honest communication ought to be the goal – with patient satisfaction a natural outgrowth of good clinical practice.  Focusing independently on satisfaction due to financial incentives or otherwise distorts this natural balance.

One issue Emergency Physicians have struggled with is the supposed marriage between opiate analgesia and patient satisfaction.  This study is a retrospective review of 4,749 Press Ganey patient satisfaction survey scores linked to ED visit information from two hospitals in Rhode Island.  Scores were compared between those who received no analgesia, those receiving analgesia, and those receiving opiate analgesia in the ED.  Many sub-analyses were undertaken, attempting to control for baseline differences between those receiving opiates and those who did not.  Essentially, across the entire survey, no consistent effects were observed between satisfaction, analgesia, and opiate analgesia.

There are two main issues with this data.  First, the Press Ganey instrument is, essentially, useless for measuring satisfaction.  The questions asked are certainly reasonable surrogates for patient satisfaction, but their proprietary tool has never been validated or compared against any other measurement device, and its discriminatory power and reliability have never been described.  Second, this study probably addresses an entirely incorrect question by focusing on pain control in the ED, rather than prescriptions at discharge.  I think Emergency Physicians are less concerned with using opiates to manage acute pain in the ED than they are with prescribing at discharge – and sending folks home without opiate prescriptions seems to be, at least anecdotally, the more challenging issue.

I wouldn’t let anyone quote this study to you as evidence that opiate analgesia is not tied to patient satisfaction, other than in the narrowest of circumstances described here.

“Lack of Association Between Press Ganey Emergency Department Patient Satisfaction Scores and Emergency Department Administration of Analgesic Medications”
http://www.ncbi.nlm.nih.gov/pubmed/24680237

Burn the ACC/AHA Low-Risk Chest Pain Guidelines

Management of low-risk chest pain is, by reasonable conjecture, one of the greatest failings of Emergency Medicine and the medical profession in general.  Whether driven by true altruism or by risk-management and zero-miss strategies based on the ACC/AHA guidelines, many, many, many patients are admitted and subjected to provocative testing.

And almost none of those patients are ultimately, correctly, diagnosed with the feared disease – acute coronary syndrome.

This is a prospective, observational evaluation of patients admitted for chest pain observation at a single academic center in Rhode Island.  Over the course of ~2 years, 3,543 patients were admitted for evaluation of chest pain after initially negative cardiac biomarkers in the Emergency Department.  Approximately half of these patients underwent stress testing.

Of 1,754 stress tests, there were 29 positives.  Of those, 9 were false positives.  Stratified by pretest probability, none of the patients with a “low probability” Diamond & Forrester Score had a true positive test.  Only 1% of patients admitted and stressed with “intermediate probability” D&F Score ultimately proved to have true positive tests.  Even with “high probability”, 5% of all stress tests performed were true positives.
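A quick sanity check on the overall yield, using only the numbers above (subtracting false positives from total positives to get true positives is my arithmetic, not a figure reported in the abstract):

```python
stress_tests = 1_754   # total stress tests performed
positives = 29         # positive results
false_positives = 9    # positives not borne out on further evaluation

true_positives = positives - false_positives   # 20
overall_yield = true_positives / stress_tests  # fraction of all tests performed
print(true_positives, f"{overall_yield:.1%}")  # 20 1.1%
```

Twenty true positives out of 1,754 tests – an overall yield of about 1.1% – is the backdrop for everything that follows.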

The author of this article means specifically to reduce stress testing in the “low probability” cohort.  This is a reasonable proposal to skim off a small percentage of tests.  However, he misses asking the better question – how do we reduce use in our “intermediate probability” cohort, which constituted 85% of admissions with just a 1.7% yield for ACS?  We need to seriously address the outdated and inefficient notion that admission and testing for these patients is the ideal strategy – and that probably starts by tossing our current guidelines out the window.

“The Association Between Pretest Probability of Coronary Artery Disease and Stress Test Utilization and Outcomes in a Chest Pain Observation Unit”
http://www.ncbi.nlm.nih.gov/pubmed/24730402

Ignorant About Costs, Interested in Learning

Survey after survey shows: physicians rarely have any idea about the costs of medical care.  And, this is unsurprising – as there is a complex divorce between hospital charges, reimbursements, and ultimate expenses shouldered by patients.  Considering all these variables, it is nigh impossible to clearly communicate the cost of care to an individual in a patient care setting.

But, a ballpark estimate would be nice.

So, how do physicians do with their ballpark estimates of the costs of routine tests and procedures in the Emergency Department?

Using CMS reimbursement rates from 2012 and 2013, this survey of 97 emergency physicians representing 11 EDs in the Salt Lake City area finds they’re usually nowhere close.  Of all the tests and procedures surveyed, only 17% of physician estimates were within ± 25% of the actual CMS reimbursement.  We would be awful on The Price Is Right.

Interestingly, the estimates varied widely.  With regards to lab tests and radiology, physicians tended to over-estimate reimbursement by >50%, while under-appreciating the charges associated with CPT codes for administration of IV fluids and IV antibiotics.  I’m not sure how to describe the host of interesting information graphics and tables detailing the bewildering range of inaccuracy, but suffice to say – we could/should/need to do a lot better.

At least, however, physicians self-rated their knowledge of costs of care as low, and over 80% wished they knew more about the charges.  So, hope is not lost – and movements like (the aptly named) Costs of Care ought to receive a ready and enthusiastic audience.

“Emergency physician knowledge of reimbursement rates associated with emergency medical care”
http://www.ncbi.nlm.nih.gov/pubmed/24657227

Will Twitter Ruin Your Diagnostic Abilities?

Medical errors, by some estimates, are associated with cognitive biases up to 75% of the time.  Given the oft-quoted 98,000 deaths per year as a result of medical error, recognition of these biases seems prudent.  Knowing is, after all, half the battle.

One of these is “availability bias”, the tendency to overestimate or underestimate the likelihood of disease depending on whether its details are readily available in memory.  Essentially, if you don’t think of it – you’ll never diagnose it – but if you think of it too frequently, you might test or treat for it more often than appropriate.

These authors subjected 38 internal medicine residents to a simulation where they read Wikipedia entries on two diseases.  Six hours later, they were asked to review and submit diagnoses for eight cases – two of which superficially resembled the disease descriptions from Wikipedia.  Finally, the residents were asked to use a structured methodology evaluating signs and symptoms in order to systematically create and winnow a list of potential diagnoses.

I’ve probably already clued you into the end result – but, basically, in the initial case review, residents had a 56% correct diagnosis rate for the “availability bias” cases and a 70% correct diagnosis rate for the others.  Then, by simply re-reading the cases in a systematic fashion, they subsequently were able to bring their rate of correct diagnosis up to 71% on the bias cases.

So, the next time you discover something novel and interesting on Twitter – try not to take it with you to work unchecked ….

“Exposure to Media Information About a Disease Can Cause Doctors to Misdiagnose Similar-Looking Clinical Cases”
http://www.ncbi.nlm.nih.gov/pubmed/24362387