I’ve only worked in a handful of Emergency Departments – but at each institution, 72-hour Emergency Department recidivism has been tracked. The simple act of bothering to track such events implies a clear conclusion: these revisits somehow reflect poor care, missed diagnoses, or other opportunities to prevent return visits. At a minimum, it’s best not to be an outlier at the high end.
These authors performed a retrospective evaluation of Healthcare Cost and Utilization Project data from New York and Florida, looking specifically at the outcomes of patients returning to the Emergency Department after an index visit. Based on approximately 9 million Emergency Department visits, these authors found recidivism starting at 8.2% by 7 days and increasing to 16.6% within 30 days. The proportion of revisits resulting in an admission to the hospital was stable at ~14.5% across the time period. Patients with the greatest number of ED visits per year were the most likely to return, and the most likely to be admitted. Interestingly, only approximately one-quarter of the revisits were identified as being for the same condition as the index visit.
The authors’ analysis focuses on comparing the outcomes of patients admitted at an index visit, re-admitted after an ED visit, and those re-admitted after a discharge from the hospital, including ICU admission, length of stay, mortality, and hospital costs. For what little insight it gives us, these outcomes tended to favor those discharged from the ED – although discharged patients were obviously younger and healthier at baseline than those who were analyzed as hospital readmissions.
These data – given the limitations of their source – do very little to inform any conclusions regarding the underlying processes at work. And, in essence, by lacking such insight, these data help support the conclusions of the authors: Emergency Department recidivism should not be used as a quality measure. This level of administrative data whitewashes any clues regarding the etiology of revisits: are they misdiagnoses? Are they high healthcare utilizers with chronic problems? Is system access to primary care inadequate? Are they scheduled returns for wound care? Were these patients appropriately given trials of outpatient therapy with an expected failure rate? Were they simply very satisfied patients returning to their new favored location for care? The overall recidivism rate, with all these confounders, is such a poor surrogate for possible missed diagnosis – and whether such missed diagnoses truly represent “low quality” care – that the opacity of the data presented here only underscores the measure’s inadequacy.
Even more importantly, this is excellent context with which to review the proposed Clinical Emergency Data Registry quality measures. Do they accurately reflect the underlying quality of care? Can they be reliably and accurately measured with little impact on workflow and care delivery? Capturing data on care delivery is an important part of improving our specialty, but this draft requires substantial feedback.
“In-Hospital Outcomes and Costs Among Patients Hospitalized During a Return Visit to the Emergency Department”
http://jama.jamanetwork.com/article.aspx?articleid=2491638