The War on Blood Cultures

There are two problems with blood cultures.  The first question concerns the likelihood of obtaining a true positive result – a question covered by this JAMA Rational Clinical Examination.

The second question is whether a true positive result is clinically meaningful.  This retrospective review of 639 cellulitis patients – 325 without medical comorbidities and 314 with – evaluated whether positive cultures changed therapy.  Forty-six cultures returned positive, half of which were judged to be contaminants.  Of the 23 true positives, 5 resulted in a change of antibiotic therapy – only 2 of which expanded the initial antibiotic choice to cover a new pathogen.  Both of those changes occurred in the immunosuppressed group.

Yet another example of the incredibly low yield of an expensive test.  We’re simply asking a question to which we already know the answer.

“Blood culture results do not affect treatment in complicated cellulitis”
www.ncbi.nlm.nih.gov/pubmed/23588078

Autopulse Advertisement in Critical Care Medicine

We’ve all seen folks come in via EMS with mechanical devices performing automated chest compressions.  These probably do a lovely job of freeing paramedics from the task of performing uninterrupted CPR, but their relationship to outcomes has typically been uncertain.

This meta-analysis and systematic review, however, reports these devices are superior to manual chest compression – with an OR of 1.6 favoring increased return of spontaneous circulation.  Considering the copious evidence that minimizing interruptions during CPR improves outcomes, this would be an important finding, and it dovetails nicely with the expected advantage of mechanical compression devices.

However, this COI statement covering each of the four authors might also be in some fashion related to the positive results reported here:
“Dr. Westfall has received modest research grant support from ZOLL Medical Corporation. Mr. Krantz has received significant research grant support from ZOLL Medical Corporation. Mr. Mullin has served as a consultant for ZOLL Medical Corporation. Dr. Kaufman is an employee of ZOLL Medical Corporation.”

Unsurprisingly, these authors also demonstrate one of the overlooked evils of meta-analyses – the obfuscation of source COIs.  This JAMA article from 2011 does a lovely job describing this critical problem, and, as expected, these conflicted authors ignore the pervasive sponsorship bias present in their selected review.  Additionally, half the included articles are only conference abstracts, whose results and methods were not subjected to the same level of rigorous peer review.

It really ought to be rather embarrassing for the editors of this journal to approve such a clearly flawed vehicle – essentially blatant advertising for their $15,000 medical device – for publication.  Journal Watch Emergency Medicine does no better, giving this article a bland and uninsightful thumbs-up.

“Mechanical Versus Manual Chest Compressions in Out-of-Hospital Cardiac Arrest: A Meta-Analysis”
www.ncbi.nlm.nih.gov/pubmed/23660728

Mixed “Cost-Conscious” Ordering Results

It’s a bit of a messy study, sadly, because the underlying idea is probably a lovely one.

These authors performed a before-and-after interventional trial in which they measured laboratory test ordering rates.  After a six-month baseline phase, the intervention phase consisted of displaying the 2008 Medicare allowable charge for a subset of frequent lab tests.  The theory, of course, is that displaying price information in the context of test ordering will alter physician behavior.

Most of the orders were placed on internal medicine services – and yes, there was a decrease in the number of orders for tests with cost information displayed.  At the same time, however, orders for tests without cost information increased.  The net result, overall, was a decrease in total testing.  Interestingly, the impact seemed mostly attributable to the replacement of CMP orders with BMPs.  Costs were reduced by $3.79 per patient-day during the intervention period.

So, the impact was mixed – slightly expensive tests were replaced by slightly less expensive tests.  More evaluation is necessary to determine whether these reductions have unanticipated impact on patient outcomes.

“Impact of Providing Fee Data on Laboratory Test Ordering”
www.ncbi.nlm.nih.gov/pubmed/23588900

How To Evaluate Decision Instruments

This lovely editorial by Steven Green from Loma Linda succinctly summarizes the limitations of clinical decision instruments.  Decision instruments, referred to in this article as decision “rules”, are potentially valuable distillations of data from large research cohorts meant to concisely address vital clinical concerns.  These include such well-known instruments as NEXUS, PERC, Centor, Alvarado, Wells, and Geneva.

He describes rigorous derivation, external validation, and ease of application as important criteria.  However, the most important topics he addresses are the related issues of “1-way” versus “2-way” application and whether a rule improves upon pre-existing clinical practice.  A “1-way” decision instrument informs clinicians only when its criteria are all met – the PERC rule, for example.  Because of its low specificity, a patient who fails the PERC rule does not necessarily need any additional testing.  The NEXUS criteria, on the other hand, constitute a “2-way” decision rule – one whose use in appropriately selected patients typically leads to radiography if its criteria are not met.

The danger, however, is the natural propensity to use a “1-way” rule like a “2-way” rule.  His example of this error is the PECARN blunt abdominal trauma article about which I previously expressed concerns.  In the PECARN blunt trauma derivation, the instrument’s specificity was actually lower than that of the clinical gestalt of the physicians involved, so the authors recommend its use only as a “1-way” rule, based on sensitivity.  If the cognitive error is made of applying it as a “2-way” rule, CT scanning will increase by 13%.  And even used correctly as a “1-way” rule, the PECARN instrument has only 97% sensitivity, compared with 99% for clinician gestalt.  This means that, if implemented as routine practice, the PECARN instrument may produce a non-trivial number of misses while potentially increasing scanning – illustrating his point about a “poorly designed” decision rule, despite the statistical power of the cohort evaluated.
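For what it’s worth, the distinction is easy to see in code – here is a minimal, purely hypothetical sketch of my own (not anything from the editorial or from PECARN itself), with made-up criteria standing in for a real instrument:

```python
# Hypothetical illustration of "1-way" versus "2-way" application of a decision
# instrument.  The criteria and the "gestalt" flag are stand-ins, not any
# published rule.

def apply_one_way(criteria_present: list[bool], gestalt_wants_ct: bool) -> bool:
    """1-way use: the instrument speaks only when it is entirely negative."""
    if not any(criteria_present):
        return False              # rule negative -> imaging deferred
    return gestalt_wants_ct       # rule positive -> rule is silent; judgment decides

def apply_two_way_misuse(criteria_present: list[bool], gestalt_wants_ct: bool) -> bool:
    """2-way misuse: any positive criterion mandates imaging, judgment ignored."""
    return any(criteria_present)

# A patient with one soft criterion whom the clinician would not have scanned:
print(apply_one_way([False, True, False], gestalt_wants_ct=False))         # False
print(apply_two_way_misuse([False, True, False], gestalt_wants_ct=False))  # True – extra CT
```

Used this way across a cohort, the “2-way” version can only ever add imaging on top of clinician judgment – which is precisely the 13% bump in CT scanning described above.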

Overall, a lovely read regarding how to properly evaluate and apply decision instruments.

“When Do Clinical Decision Rules Improve Patient Care?”
www.ncbi.nlm.nih.gov/pubmed/23548403

“Neuroimaging Negative” Strokes Are A Lie

Back in 2011, there was an article in Annals of Emergency Medicine discussing what a fantastic job we were doing diagnosing stroke and avoiding administering tPA to “stroke mimics”.  It reported a 1.4% rate of tPA administration to stroke mimics – none of whom had bleeds.  The problem I pointed out, both on my blog and in a response letter to Annals, was that the authors invented a new category called “neuroimaging negative” acute stroke – a group that probably consisted entirely of stroke mimics.  Counting them would have changed the rate of tPA administration to stroke mimics from 1.4% to 29.3%.  The authors, who have financial conflicts of interest with the manufacturers of tPA, disagreed.

This study, part of the “Lesion Evolution in Stroke and Ischemia On Neuroimaging” project, evaluated the progression of lesions on MRI following tPA administration.  These authors found 231 patients with acute stroke who were initially screened by MRI prior to tPA administration and had evidence of infarction on diffusion-weighted imaging.  They found that, following tPA administration, only 2 patients had resolution of an MRI DWI lesion.  They therefore conclude that “Patients with a stroke are unlikely to have complete DWI lesion reversal within 24 hours after IV tPA treatment,” and that patients with no DWI lesion following tPA administration should be considered to have a diagnosis other than acute stroke.

Thus, this confirms my conclusion that the 27.9% of patients from the prior study with “neuroimaging negative” acute stroke ought universally to be considered to have had a diagnosis other than acute stroke.  The reality is that we are likely treating an ever-greater number of stroke mimics – and further efforts to push Emergency Physicians to treat additional patients more quickly are certain to expose yet more patients to avoidable harms.
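The arithmetic connecting these numbers is trivial – here it is spelled out, purely my own back-of-the-envelope using the percentages quoted above, not a calculation from either paper:

```python
# Reclassifying the "neuroimaging negative" group as stroke mimics, using the
# percentages quoted above (back-of-the-envelope only).
reported_mimic_rate = 1.4           # % of tPA recipients labeled stroke mimics
neuroimaging_negative_rate = 27.9   # % labeled "neuroimaging negative" acute stroke

combined = reported_mimic_rate + neuroimaging_negative_rate
print(f"{combined:.1f}% of tPA recipients plausibly without acute ischemic stroke")  # 29.3%
```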

“Negative Diffusion-Weighted Imaging After Intravenous Tissue-Type Plasminogen Activator Is Rare and Unlikely to Indicate Averted Infarction”
http://www.ncbi.nlm.nih.gov/pubmed/23572476

A Muddled Look at ED CPOE

Computerized Provider Order Entry – the defining transition in medicine over the last couple of decades.  Love it or hate it, as UCSF’s CEO says, the best way to characterize the industry leader is that it succeeds “not because it’s so good, but because others are so bad.”  A fantastic sentiment for a trillion-dollar industry that has somehow become an unavoidable reality of medical practice.

But, it’s not all doom and gloom.  This systematic review of CPOE in use in the Emergency Department identified 22 articles evaluating different aspects of EDIS (emergency department information systems) – and some were even helpful!  The main area of benefit – one demonstrated repeatedly in the informatics literature – was a reduction in medication prescribing errors, overdoses, and potential adverse drug events.  There was no consensus regarding changes in patient flow, length of stay, or time spent in direct patient care.  On the flip side, some CPOE interventions were harmful – the effect of order sets used as decision support was implementation-dependent, with some institutions seeing increased testing while others saw decreases.

A muddled look at a muddled landscape with, almost certainly, a muddled immediate future.  There are a lot of decisions being made in boardrooms and committees regarding the use of these systems, and not nearly enough evaluation of the unintended consequences.

“May you live in interesting times,” indeed.

“The Effect of Computerized Provider Order Entry Systems on Clinical Care and Work Processes in Emergency Departments: A Systematic Review of the Quantitative Literature”
www.ncbi.nlm.nih.gov/pubmed/23548404

Negative Tests Fail to Reassure Patients

This article touches on a topic we encounter all the time in Emergency Medicine – testing with the intent of “reassurance”.  The assumption is that a patient concerned about a symptom will be less anxious about their illness after receiving a favorable, negative test result.

That assumption, according to this meta-analysis and systematic review, is wrong.  These authors gathered 14 trials evaluating the effect of diagnostic testing in low-pretest-probability settings on downstream patient outcomes.  The tests included endoscopy for mild dyspepsia, radiography for low back pain, and cardiac event recording for palpitations.  It is a difficult article to interpret, particularly because there is so much heterogeneity between the included studies, but the general conclusion is that tests performed in the setting of low pretest probability do not decrease subsequent primary care utilization, symptom recurrence, or anxiety regarding illness.

It’s rarely easy to tell a patient no testing is indicated – but this is yet another example illustrating the minimal benefits of over-testing.

“Reassurance After Diagnostic Testing With a Low Pretest Probability of Serious Disease”
http://www.ncbi.nlm.nih.gov/pubmed/23440131

Don’t Get Sick on the Weekend

Quite bluntly, you’re more likely to die.

These authors analyzed the 2008 Nationwide Emergency Department Sample, using 4,225,973 patient encounters as the basis of their observational analysis.  The absolute mortality difference between weekday and weekend emergency department presentations is tiny – about 0.2%.  However, the difference is very consistent across type of insurance, teaching hospital status, and hospital funding source.
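For perspective, that figure works out to roughly one additional death for every 500 weekend presentations – a trivial bit of arithmetic of my own, not a number reported by the authors:

```python
# An absolute mortality difference of ~0.2% corresponds to roughly one extra
# death per 1 / 0.002 = 500 weekend ED presentations (my arithmetic, not the authors').
absolute_risk_difference = 0.002
print(f"About 1 additional death per {1 / absolute_risk_difference:.0f} weekend ED visits")
```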

The NEDS sample did not offer these authors any specific explanation for the “weekend effect”, but they suspect it is due to decreased resource availability on weekends.  The authors note that specific systems (e.g., trauma centers, PICUs, stroke centers) in which weekend staffing is unchanged have demonstrated the ability to eliminate such weekend phenomena.  However, weekend shifts are probably always going to be less desirable to staff – so we’re likely stuck with this slight mortality bump on weekends.

“Don’t get sick on the weekend: an evaluation of the weekend effect on mortality for patients visiting US EDs”
www.ncbi.nlm.nih.gov/pubmed/23465873

Critical Deficiencies in Pediatric EM Training

This article is an overview of the critical procedures performed over a one-year period at Cincinnati Children’s, a large, well-respected, level 1 trauma center with a pediatric emergency medicine fellowship program.  In theory, this facility ought to provide trainees with top-flight training, including adequate exposure to critical life-saving procedures.

Not exactly.

In that one-year period, the PEM fellows performed 32 intubations, 7 intraosseous line placements, 3 tube thoracostomies, and zero central line placements.  This accounted for approximately 25% of all available procedures – attending physicians and residents poached the remainder during the year.  Based on these observational data, the authors conclude that PEM training might not be sufficient to provide adequate procedural expertise.  They further note that pediatric emergency departments have such routinely low acuity – 2.5 of every 1,000 patients requiring critical resuscitation – that it is inevitable these skills will deteriorate.

Essentially, this means the general level of emergency physician preparedness for a critically ill child is very low.  PEM folks might have more pediatric-specific experience – but very limited procedural exposure – while general emergency physicians perform procedures far more frequently – but on adults.  The authors even specifically note 63% of PEM faculty did not perform a single successful intubation throughout the entire year.

Their solution – which I tend to agree with – is the development of high-quality simulation tools to be used for training and maintenance of skills.  Otherwise, we won’t be providing optimal care to the few critically ill children who do arrive.

“The Spectrum and Frequency of Critical Procedures Performed in a Pediatric Emergency Department: Implications of a Provider-Level View”
www.ncbi.nlm.nih.gov/pubmed/22841174

The Boondoggle of Step 2 CS

Recent medical school graduates are familiar with the Step 2 Clinical Skills examination, a day-long charade of simulated clinical encounters intended to screen out medical students who are incapable of functioning in a clinical setting.  This test was adapted from the ECFMG Clinical Skills Assessment, designed essentially to screen out foreign medical graduates whose communication skills are inadequate to safely practice medicine in the United States.

However, U.S. and Canadian medical school graduates pass this test 98% of the time on the first attempt, and 91% of the time on a re-attempt.  This means $20.4 million is expended each year in test fees – and probably half again that amount in travel expenses – to identify the 30-odd medical school graduates who are truly non-functional.  The authors of this brief letter in the NEJM suggest that, with interest compounding on medical school debt, it costs over a million dollars per failed student.
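Their figure is easy to approximate from the numbers above – this is my own back-of-the-envelope, using the quoted fee total, the travel guess from the preceding sentence, and ignoring the interest on deferred debt entirely:

```python
# Back-of-the-envelope cost per truly failing graduate, before even counting
# the compounding interest on medical school debt.
test_fees = 20_400_000       # annual Step 2 CS fees, USD (quoted above)
travel = test_fees / 2       # "probably half again that amount" in travel expenses
true_failures = 30           # the "30-odd" graduates who fail on both attempts

print(f"${(test_fees + travel) / true_failures:,.0f} per identified failure")  # ~$1,020,000
```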

Clearly, some medical students are not capable of functioning as physicians.  However, clinical skills teaching, evaluation, and remediation ought to be part of the purview of the medical school training program that has multi-year longitudinal experience with the student, not a one-day simulation.  I’m sure some of the few who fail Step 2 CS twice are capable of safely practicing medicine, and certainly many who pass Step 2 CS still require additional teaching.  I agree with these authors that this test is an expensive and ineffective farce.

Then again, as this NYTimes vignette points out, medical schools are having a tough time failing folks for poor clinical skills.  However, the solution is not to pass the buck along to the NBME.

“The Step 2 Clinical Skills Exam — A Poor Value Proposition”
www.nejm.org/doi/full/10.1056/NEJMp1213760