A Very Odd Look at CT In the ED

Why do we perform CTs in the Emergency Department?  It’s fair to say the primary indication is diagnostic certainty: the ruling-in or ruling-out of a disease process of substantial clinical relevance.  However, this study raises the question: have we lost touch with this concept of “substantial clinical relevance”?

This is a qualitative study evaluating physician decision-making in the context of CT ordering.  These authors administered pre- and post-CT questionnaires to physicians – approximately 2/3rds of them attending physicians – for 1,280 patients in the Emergency Department.  The main gist: what are you worried about?  How confident are you in the diagnosis?  And, then, after CT, how about now?

The bullet-point summary:

  • Physician confidence in their diagnosis grew after CT.  Splendid.
  • CT excluded or confirmed alternative diagnoses in 95+% of cases.  Excellent.
  • Increasing pre-CT confidence in a leading diagnosis was associated with lesser changes in leading diagnosis post-CT.  OK.
  • Many pre-CT leading diagnoses were benign, but with low physician confidence.  Except for CT head.
  • Nearly 3/4ths of CT scans of the head had a leading diagnosis of “benign headache” or other, no change in diagnosis following CT, and generally high physician confidence.  This is awful.
  • Finally, if you were hoping a CT would prevent bouncebacks: no.  15% of abdominal pain returned within a month for related reasons, as well as 14% of chest pain/dyspnea, and 11% of headache.

CT is an important tool.  It certainly makes the life of the risk-averse physician much, much easier.  However, the instances in which CT identified an important diagnosis in this study are certainly in the minority – most final diagnoses were either benign or could have been achieved through other means.  Unfortunately, very few specific actionable items can be taken away from this study – excepting CT for headache (ugh) – but it certainly shows there is fertile ground for a culture change to take root and decrease low-yield CT utilization.

“CT in the Emergency Department: A Real-Time Study of Changes in Physician Decision Making”
http://www.ncbi.nlm.nih.gov/pubmed/26402399

The “Routine” Chest X-Ray

Many presenting complaints in the Emergency Department call for cardiothoracic imaging.  Some can be assessed by point-of-care ultrasound, but, for the most part, plain radiography is the established routine.  Whether the pretest probability of disease warrants such widespread use is one matter.  This article documents yet another – duplication of imaging.

These authors review four years of radiology from their institution and document 3,627 patients for whom both CXR and chest CT were ordered.  Their main analysis breaks down the use of radiology, looking mostly at the order in which these studies were requested, and whether results from one were available prior to the completion of the other.

For the most part, the CXR was ordered first, and the images were available for review before the subsequent CT chest.  However, in 354 (9.8%) cases, the CXR images had not yet been acquired when the CT chest was ordered.  This group probably overlaps with the 134 (3.7%) cases in which the CT chest was ordered simultaneously with or prior to the CXR.  Regardless – if the results were clinically irrelevant, why order the test?

I think it’s fair to say many of the CXRs included in this study were pointlessly redundant – especially when the decision for CT was obviously made prior to their acquisition.  No doubt the CXR is included in most ED protocols for certain chief complaints, and is ordered reflexively without thought.

Looking for waste to target in the system?  Here you go.

“Inefficient Resource Use for Patients Who Receive Both a Chest Radiograph and Chest CT in a Single Emergency Department Visit”
http://www.ncbi.nlm.nih.gov/pubmed/26387774

Not So Fast on Race-Related Oligoanalgesia

This recent study regarding pain control received a lot of press, covered by both Reuters and NBC News.  The general gist of the breathless coverage seems to indict physicians for latent biases against treating African American children with opiates.

I’m not so certain.

This is a retrospective evaluation of a national Emergency Department database covering seven years of ED visits for appendicitis, looking at pain control disparity between white children and minorities.  Pain management was documented in only 57% of children, 41% of which involved opiates.  Children of African American descent received opiate medication only 12% of the time, leading to the authors’ observations of an apparent reluctance to treat this population with opiates.

But, I think the foundation of their analysis may be misleading.  The authors state: “The following covariates were included in our analyses to adjust for potential confounding: ethnicity, age, sex, insurance status, triage acuity level, pain score, geographic region, ED type, and survey year.”  However, I think these data need to be addressed at a within-hospital level, not as a pooled cohort.  African Americans have been previously shown to be over-represented at low-quality, safety-net hospitals – the sort of hospitals that almost assuredly do a poor job of addressing and managing pain across all their patients.  Indeed, when other researchers have looked at racial disparities in care for acute myocardial infarction, performing within-hospital analyses dramatically altered their findings, with individual hospital inadequacies accounting for a greater effect than ethnicity.

The foundational issues in race-related difference in care may yet be present, but I do not believe to the magnitude these data reflect.  Rather than suggesting “there may be a higher threshold of pain score for administering analgesia to black patients with appendicitis,” these data probably reflect the underlying under-resourced care available to this population.  A tremendous and embarrassing problem, to be sure, but with a different approach needed for a solution.

“Racial Disparities in Pain Management of Children With Appendicitis in Emergency Departments”
http://archpedi.jamanetwork.com/article.aspx?articleid=2441797

The MD/NP Equivalency Study!

As covered by Medscape:

“Nurse practitioners’ diagnostic reasoning abilities compared favourably to those of doctors in terms of diagnoses made, problems identified and action plans proposed from a complex case scenario.”

Certainly not delving into the myriad of issues associated with healthcare roles and training, but, from a critical appraisal standpoint:

  • A gold standard for acute clinical evaluation determined by a general practitioner, a rheumatologist, and a diabetes nurse practitioner.
  • An inability to recruit 30 physicians to match the 30 NPs for the study, and thus it proceeded with only 16.
  • Many of the “correct diagnoses” involved in their test of equivalency were related to chronic health maintenance, and not the acute illness of presentation.
  • The NPs recruited had almost 30 years of clinical experience, compared with the physicians, all still in training with an average of 6 years of experience, several of whom were engaged in non-primary care (e.g., cardiology) specialties.

The commentary on Medscape waxes poetic regarding reconciliation of independence and oversight issues based on this “evidence”.  The limitations in these data are so profound that this study is virtually meaningless – and serves no function in further illuminating the safety or effectiveness of scope of practice, as these authors unfortunately attempt.

“Nurse practitioners versus doctors diagnostic reasoning in a complex case presentation to an acute tertiary hospital: A comparative study”
http://www.ncbi.nlm.nih.gov/pubmed/25234268

The NNT of a Chest Pain Admission

To prevent death: 333.

In a bitterly complex analysis of Centers for Medicare and Medicaid Services data, these authors describe a relationship between admission rate and subsequent cardiac adverse events.  Based on a statistical sample of Medicare patients visiting acute care hospitals, these authors calculate an admission rate for chest pain for each hospital, and divide the sample into quintiles.  Then, the authors follow index visits for chest pain to those hospitals, and measure 30-day acute myocardial infarction or death.  Thus, a relationship between admission rate and poor outcomes.

The mean adjusted admission rate for chest pain ranged from 37.5% in the lowest quintile to 81.0% in the highest quintile.  Owing to the large sample size, many of the differences between hospitals in each quintile meet statistical significance.  However, the difference that leaps out at me most: for-profit hospitals represented 24% of the highest quintile for admissions, while comprising only 7.8% of the lowest.

And, what was that massive variation and expenditure associated with, in terms of beneficial outcomes?  An inconsistent reduction in subsequent AMI and death which, through multivariate logistic regression, was equal to about 3.6 fewer AMIs and 2.8 fewer deaths per 1,000 patients – with very wide 95% CIs.

And, thus, to oversimplify and overstate the soundness of the analysis, the NNT of 333.
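Just to make the arithmetic explicit – a minimal sketch of the standard number-needed-to-treat calculation, treating the regression-adjusted mortality reduction above as an absolute risk reduction:

```python
# NNT = 1 / absolute risk reduction (ARR).
# The adjusted estimate above: ~2.8 fewer deaths per 1,000 patients.
arr_deaths = 2.8 / 1000
nnt_deaths = 1 / arr_deaths        # ~357 admissions per death prevented

# Rounding the ARR to ~3 per 1,000 yields the headline figure:
nnt_headline = 1 / (3 / 1000)      # ~333

print(round(nnt_deaths), round(nnt_headline))  # 357 333
```

Either way, given the very wide confidence intervals around the underlying estimate, the true figure could be substantially higher or lower.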

It seems very reasonable to suggest a relationship between intensity of care and 30-day cardiac outcomes.  Such intensity of care, however, is quite expensive – on the order of probably $1.5-$2M per death prevented, and that captured only within this 30-day window.  As our population ages, we are simply going to have to do better – in order to maximize the value of the limited healthcare dollars.

“Variation in Chest Pain Emergency Department Admission Rates and Acute Myocardial Infarction and Death Within 30 Days in the Medicare Population”
http://www.ncbi.nlm.nih.gov/pubmed/26205260

Goodness Gracious We’re &*@ing Up Sinusitis

The American Academy of Allergy, Asthma, and Immunology has a lovely Choosing Wisely statement on sinusitis, featuring the following highlights:

  • Antibiotics usually do not help sinus problems.
  • Antibiotics cost money.
  • Antibiotics have risks.

So, how does one of the United States’ largest organized health systems fare in the treatment of such a simple, basic, commonplace condition?  A system, perhaps, that prides itself on internal quality initiatives and guideline adherence?  Well, based on this sample of 152,774 Primary Care, Urgent Care, and Emergency Department patients in Kaiser Southern California, they are: still awful.

  • ED patients received antibiotics 72.8% of the time.
  • UC patients received antibiotics 89.3% of the time.
  • PC patients received antibiotics 89.8% of the time.

And, not only that, antibiotic usage was all over the map, with large cohorts receiving prescriptions for less-appropriate options such as azithromycin and trimethoprim-sulfamethoxazole.

Why are we so terrible at this?

“Low-Value Care for Acute Sinusitis Encounters: Who’s Choosing Wisely?”
http://www.ajmc.com/journals/issue/2015/2015-vol21-n7/Low-Value-Care-for-Acute-Sinusitis-Encounters-Choosing-Wisely

A Little Intubation Checklist Magic

In the interests of patient safety, many have turned to peri-procedural checklists.  Rather than, essentially, “winging it”, a standardized protocol is followed each time, reducing the chance of an important omission.

These authors describe a checklist intervention for, as they describe, the high-risk procedure of endotracheal intubation in the setting of trauma.  The checklist involves, generally, assignment of roles, explicit back-up airway planning, and adequate patient positioning.  The authors used a before-and-after design using video review of all intubation events to compare steps performed.

In the six-month pre-checklist period, 7 of 76 intubation events resulted in complications – 6 desaturations, 2 emesis, and 2 hypotension.  In the post-intervention period, using the checklist, events were reduced to a single episode of desaturation in 65 events.  So, success?

As with every before-and-after study, it is hard to separate the effect of the checklist from the educational diffusion associated with checklist exposure.  Would another, less intrusive, intervention have been just as successful?  Will the checklist lose effectiveness over time as it is superseded by newer safety initiatives?  And, most importantly, what did operators actually do differently after checklist implementation?

Only 4 of 15 checklist elements differed from the pre-checklist period: verbalization of backup intubation technique (61.8% vs. 90.8%), pre-oxygenation (47.3% vs. 75.4%), team member roles verbalized (76.4% vs. 98.5%), and optimal patient positioning (80.3% vs. 100%).  If only four behaviors were substantially changed, are they responsible for the outcomes difference – which, technically, is solely episodes of hypoxia?

Their intervention seems reasonable, and the procedure is likely high-risk enough to warrant a checklist.  However, I probably would not implement their specific checklist, as some refinement to the highest-yield items would probably be of benefit.

“A Preprocedural Checklist Improves the Safety of Emergency Department Intubation of Trauma Patients”
http://www.ncbi.nlm.nih.gov/pubmed/26194607

Your Bouncebacks Are Not Alone

“Remember that patient you had yesterday?” is infrequently a favorable start to a conversation.  Emergency Department bouncebacks are a frequently tracked metric, ostensibly for self-reflection, but also as a proxy for care quality and mismanagement.

This is a six-state review of 2 to 5 years of data linked between State Emergency Department Databases and State Inpatient Databases, evaluating Emergency Department recidivism up to 30 days.  The authors also linked these data to healthcare cost data, but the highest-quality cut of meat here is the detail on bouncebacks.  Based on 53,530,443 Emergency Department visits, these authors found the overall 3-day revisit rate was 8.2%, and the 30-day revisit rate was 19.9%.  Approximately 2/3rds of revisits were to the same Emergency Department, with the remainder choosing a different ED.

These numbers, I think, are much higher than most would expect – and provide at least a small amount of solace if you feel as though it seems there’s always a previous patient of yours checking back into the ED.  The authors break down several interesting details regarding the types of revisits:

  • Skin and soft-tissue infections resulted in 23.1% 3-day revisit rates, with 12.9% admission on revisit.
  • Abdominal pain was the second-most frequent revisit, at 9.7%, associated with 29.9% admission on revisit.
  • Patients aged 18-44 were more likely to visit a different ED for the second visit, while patients aged 65 and above were the most likely to be admitted on revisit.
  • Patients with back pain were the most likely to revisit a different ED within 3 days – a 7.8% overall revisit rate, with 41% of those revisits occurring at a different ED.

Simply at face value, these additional visits are expensive and resource-intensive – particularly if there’s not an effective local electronic information exchange preventing duplication of testing.  There is also clearly ample opportunity to develop targeted interventions for certain groups of patients to potentially provide follow-up care in a lower-cost setting.

“Revisit Rates and Associated Costs After an Emergency Department Encounter”
http://www.ncbi.nlm.nih.gov/pubmed/26030633

A Window Into Your EHR Sepsis Alert

Hospitals are generally interested in detecting and treating sepsis.  As a result of multiple quality measures, however, now they are deeply in love with detecting and treating sepsis.  And this means: yet another alert in your electronic health record.

One of these alerts, created by the Cerner Corporation, is described in a recent publication in the American Journal of Medical Quality.  Their cloud-based system analyzes patient data in real-time as it enters the EHR and matches the data against the SIRS criteria.  Based on 6,200 hospitalizations retrospectively reviewed, the alert fired for 817 (13%) of patients.  Of these, 622 (76%) were either superfluous or erroneous, with the alert occurring either after the clinician had ordered antibiotics or in patients for whom no infection was suspected or treated.  Of the remaining alerts occurring prior to action to treat or diagnose infection, most (89%) occurred in the Emergency Department, and a substantial number (34%) were erroneous.

Therefore, based on the authors’ presented data, 126 of 817 (15%) of SIRS alerts provided accurate, potentially valuable information.  Unfortunately, another 80 patients in the hospitalized cohort received discharge diagnoses of sepsis despite never triggering the tool – meaning false negatives approach nearly 2/3rds the number of potentially useful true positives.  And, finally, these data only describe patients requiring hospitalization – i.e., not including those discharged from the Emergency Department.  We can only speculate regarding the number of alerts triggered on the diverse ED population not requiring hospitalization – every asthmatic, minor trauma, pancreatitis, etc.
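Stringing the reported figures together gives a rough sense of the alert’s performance – a back-of-the-envelope sketch using only the numbers above, not a formal test-characteristics analysis (and, again, excluding the discharged ED population entirely):

```python
# Back-of-envelope performance of the SIRS alert, from the figures above.
alerts_fired = 817          # alerts triggered among 6,200 hospitalizations
useful_true_positives = 126 # accurate alerts preceding any action on sepsis
missed_sepsis = 80          # discharge diagnosis of sepsis, alert never fired

useful_fraction = useful_true_positives / alerts_fired   # ~0.15
fn_to_tp_ratio = missed_sepsis / useful_true_positives   # ~0.63, i.e. ~2/3rds

# Implied sensitivity among hospitalized patients with sepsis:
implied_sensitivity = useful_true_positives / (useful_true_positives + missed_sepsis)

print(round(useful_fraction, 2), round(fn_to_tp_ratio, 2), round(implied_sensitivity, 2))
```

So, roughly 15% of alerts carry useful information, while the tool misses nearly 40% of the hospitalized sepsis cases it exists to catch.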

The lead author proudly concludes their tool is “an effective approach toward early recognition of sepsis in a hospital setting.”  Of course, the author, employed by Cerner, also declares he has no potential conflicts of interest regarding the publication in question.

So, if the definition of “effective” is lower than probably 10% utility, that is the performance you’re looking at with these SIRS-based tools.  Considering, on one hand, the alert fatigue, and, on the other, the number of additional interventions and unnecessary tests these sorts of alerts bludgeon physicians into – such unsophisticated SIRS alerts are almost certainly more harm than good.

“Clinical Decision Support for Early Recognition of Sepsis”
http://www.ncbi.nlm.nih.gov/pubmed/25385815

A Laughable tPA “Systematic Review”

Over 200,000 physicians belong to the American Medical Association.  The Journal, therefore, of this Association has a significant audience and a long tradition.  Continuing Medical Education inserts in JAMA may represent the primary education regarding many new developments for general practitioners.

Unfortunately, the authors of this most recent CME portion seem to require their own education on the conduct of a “systematic review”.

A properly performed systematic review utilizes a precise, replicable, well-described search strategy with which to canvass the evidence for synthesis.  The assembled evidence is then evaluated based on pre-specified criteria for inclusion or exclusion.  The end-result, hopefully, is a knowledge translation document based on the entire scope of published literature, accounting for controversy and irregularity in the context of a larger summary.

These authors perform a systematic review on “acute stroke intervention”.  They identify and review 145 abstracts utilizing multiple combinations of MeSH terms and synonyms for “brain ischemia/drug therapy, stroke drug/therapy, tissue plasminogen activator, fibrinolytic agents, endovascular procedures, thrombectomy, time factors, emergency service, treatment outcome, multicenter study, and randomized controlled trial”.  A massive undertaking, to be sure – considering these authors are also including intra-arterial and mechanical therapy in their review.

Yet, as indicated in their evidence review chart in the supplement, this strategy managed to identify only 17 RCTs – in the whole of systemic and endovascular therapy.  As an example for comparison, the latest Cochrane Review of thrombolytics for acute ischemic stroke included 27 trials of fibrinolytic agents alone.  And, as covered in their text and cited in their References, the RCT evidence regarding systemic therapy for acute stroke consists of: NINDS and ECASS III.

That’s it.

No MAST-E, MAST-I, or ASK.  No mention of the smaller imaging-guided trials, EPITHET, DEDAS, DIAS, or DIAS II.  Or, even excluding non-tPA trials, no ECASS, ECASS II, ATLANTIS, or, even the largest of flawed acute stroke trials, IST-3.  And, even with such limited coverage, certainly no mention of any of the controversy over imbalances in NINDS, nor flaws in ECASS III pertaining to tPA’s persistent non-approval by the FDA for the 3-4.5 hour time window.

If this were simply a commentary for the lay press regarding the bare minimum highlights of the last 20 years of stroke treatment, perhaps this would suffice.  And, frankly, these authors do much better regarding their reporting on the recent endovascular trials.  But, a CME publication in a prominent medical journal failing to address 90% of the evidence on a particular topic – yet calling itself a “systematic review” – is retraction-worthy.

“Acute Stroke Intervention – A Systematic Review”
http://jama.jamanetwork.com/article.aspx?articleid=2247149