More Mistakes In An Unfamiliar System

Probably tells us what we already know – and likely underestimates the problem.

These authors take a retrospective look at all the medication errors reported between 2000 and 2005, and then try to associate increased errors with the involvement of a temporary staff member. The problem is, they don’t actually have staffing documents identifying which employees were temporary – they rely on whether a QA field listing “contributing factors”, of which temporary staffing is one option, was populated. So, you can dismiss this as a bit of garbage-in/garbage-out depending on how accurate the reporting is – but, I figure, if anything, people will forget to implicate temporary staffing more often than not.

More interesting – and potentially confounding with regard to temporary vs. permanent staff – are the perceived reasons reported behind the medication errors. Temporary staff were more likely to be reported as having knowledge deficits, having performance deficits, and failing to follow appropriate procedures. I might read into that data that it’s easier for an unfamiliar temp to appear knowledge-deficient, although that’s just my own imagination.

From a risk management standpoint, the solution seems clear: whatever it costs to retain your permanent staff members, it is almost assuredly less than the cost of the errors temps inflict upon patients.

“Are Temporary Staff Associated with More Severe Emergency Department Medication Errors?”

Who Are The Readmitted?

Now, where I trained, we were the only useful facility for hundreds of miles – so we actually had a lot of continuity of care in the Emergency Department. And nothing beat the continuity we saw when a patient discharged in the morning was back in our Emergency Department by evening – along with the inevitable question of “how did they screw this up?”

This is a retrospective look at readmissions from 11 teaching and community hospitals, attempting to describe the readmissions as avoidable vs. unavoidable, characterize the causes for readmission, and see whether any baseline characteristics might predict readmission. They found avoidable readmissions were in the minority, and there was no useful predictive information in the baseline differences between the readmitted group and the overall cohort – comorbidities, length of stay, new medications, etc. When patients were avoidably readmitted, however, several recurring factors were noted:
 – Management error (48% of the time)
 – Surgical complications (38.5%)
 – Medication-related event (32.7%)
 – Nosocomial infection (18.3%)
 – System error (15.4%)
 – Diagnostic error (10.6%).

Considering CMS is looking closely at decreasing payments to physicians and hospitals for readmissions, this study provides a small amount of systematic insight into some of the things we’ve all observed anecdotally.

“Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions.”
www.cmaj.ca/content/early/2011/08/22/cmaj.110400

Malpractice Risk in Emergency Medicine

I was actually surprised by these statistics – I expected Emergency Medicine to be higher. After all, we’re meeting people with potentially unrealistic expectations, who are suffering long wait times, without any continuity of care, and with potential bad outcomes lurking everywhere.

But, really, our rates of claims filed and claims with payout are pretty much average across specialties. Neurosurgery and Thoracic Surgery are the nightmare specialties, where nearly a fifth of practicing physicians have a claim filed against them each year. Another interesting statistic: Gynecology, only a little above average in claims filed, has the highest percentage of claims resulting in payout.

Neurosurgery, Neurology, and Internal Medicine lead the way in median payout, but Pediatrics, Pathology, and Ob/Gyn lead the way in mean payout – apparently skewed by the occasional massive award.

Given the legislation pending in many states that would extend additional protections to Emergency Physicians and physicians on call to Emergency Departments, it’s really not a bad time to be in EM from a liability standpoint.

“Malpractice Risk According to Physician Specialty”
www.ncbi.nlm.nih.gov/pubmed/21848463

CT Use Is Increasing(ly Justified?)

A retrospective cohort analysis based on the NHAMCS dataset, with all the limitations inherent therein.

We have a 330% increase in the use of CT in the Emergency Department – up from 3.2% of visits in 1996 to 13.9% in 2007. This increase is fairly stable across all age groups (including a rate of nearly 5% now in patients under 18 years of age). The interesting part of the paper – the part that tells us something we didn’t already know – is the data regarding the adjusted rate of hospitalization or transfer after receiving CT. In 1996, 26% of patients receiving a CT were admitted to the hospital, while now only 12% of patients receiving a CT are admitted.

The problem is, I’ve seen news organizations running with the conclusion that, because CT rates are higher while the relative risk of hospitalization after a CT is lower, CT must be preventing hospitalizations. You can’t draw any such conclusion from these data – particularly considering that hospitalizations have climbed over that same period.
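
A back-of-envelope illustration of why a falling per-CT admission rate doesn’t imply prevented hospitalizations: a minimal sketch using only the percentages quoted above, with an arbitrary 1,000 ED visits as the denominator and no attempt to reproduce the paper’s adjustments.

```python
# Illustrative arithmetic only: combines the CT-use and admitted-after-CT
# percentages quoted in the post; this is not the study's actual analysis.
visits = 1_000

ct_1996 = visits * 0.032   # 3.2% of visits got a CT in 1996 -> ~32 scans
ct_2007 = visits * 0.139   # 13.9% of visits in 2007 -> ~139 scans
print(f"Relative increase in CT use: {(0.139 - 0.032) / 0.032:.0%}")  # ~334%, the "330%" figure

admitted_1996 = ct_1996 * 0.26   # 26% of scanned patients admitted
admitted_2007 = ct_2007 * 0.12   # 12% of scanned patients admitted
print(f"Admitted after CT per 1,000 visits: {admitted_1996:.0f} (1996) vs. {admitted_2007:.0f} (2007)")
# The per-CT admission rate halved, but the absolute number of scanned-then-admitted
# patients roughly doubled -- a falling rate says nothing about hospitalizations prevented.
```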

We just aren’t seeing any data linking the increase in CT use to improved outcomes. CT certainly has its place as the standard of care in many instances, but there’s no silver lining to this 330% increase.

“National Trends in Use of Computed Tomography in the Emergency Department.”
www.ncbi.nlm.nih.gov/pubmed/21115875

High-Risk Discharge Diagnoses

Good news – only 0.05% of your discharged patients will meet an untimely end within 7 days of the Emergency Department visit.  Not a frightening number, but definitely enough to keep you on your toes.

It’s a retrospective Kaiser Health System cohort of 728,312 visits across two years; the authors calculated a base rate of 50 deaths per 100,000 discharges, and then looked at features and discharge diagnoses that increased the OR for death within 7 days. Even the sickest and most elderly have ORs low enough that you’re still going to see good outcomes the overwhelming preponderance of the time: age greater than 80 gives an OR of 10.6, and a score >3 on the Charlson Comorbidity Index gives an OR of 6.7. As for the diagnoses most highly associated with bad outcomes, the only two with an OR greater than 5 are noninfectious lung disease (OR 7.1) and renal disease (OR 5.6). These are interesting buckets of diagnoses, specifically because of how nonspecific they are – which the authors attribute to diagnostic uncertainty. That is, the reason patients with “noninfectious lung disease” had bad outcomes is that clinicians missed the specific morbid diagnosis in these patients.
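
To put those odds ratios in perspective, here is a rough conversion into approximate absolute risks. It applies each OR directly to the overall base rate of 50 per 100,000, which overstates things slightly (the true reference-group rate is a bit lower than the overall rate), but it shows why even an OR of 10.6 still leaves well over 99% of patients alive at 7 days.

```python
# Rough back-of-envelope: applies each reported OR to the overall 7-day base rate.
def risk_from_or(base_risk, odds_ratio):
    """Convert a baseline risk plus an odds ratio into the implied absolute risk."""
    base_odds = base_risk / (1 - base_risk)
    new_odds = base_odds * odds_ratio
    return new_odds / (1 + new_odds)

base = 50 / 100_000  # 0.05% 7-day mortality after ED discharge

for label, odds_ratio in [("age > 80", 10.6), ("Charlson > 3", 6.7),
                          ("noninfectious lung disease", 7.1), ("renal disease", 5.6)]:
    print(f"{label}: ~{risk_from_or(base, odds_ratio):.2%} 7-day mortality")
# age > 80 works out to ~0.53% -- roughly 199 of every 200 such patients still survive the week.
```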
I don’t think this is practice-changing news, since these rates are so low in general that additional testing and hospitalization would harm more people than these missed diagnoses – but it’s an interesting number-crunching article.

“Patterns and Predictors of Short-Term Death after Emergency Department Discharge”

Physicians Will Test For PE However They Damn Well Please

Another paper on decision support in the Emergency Department.

Basically, in this study, whenever an emergency physician considered the diagnosis of pulmonary embolism, a computerized intervention forced the calculation of a Wells score to help guide further evaluation. Clinicians were not bound by the recommendations of the Wells calculator in their ordering – and they sure didn’t follow them. There were 229 patients in the “post-intervention” group, and 26% of the clinicians decided that evidence-based medicine wasn’t for them and were “non-compliant” with the testing strategy.
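
For reference, here is a minimal sketch of the sort of forced calculation being described: the standard Wells criteria for PE, plus the testing pathway implied by the “low-risk Wells + positive d-Dimer or high-risk Wells” cohort discussed below. The dichotomized cutoff of 4 and the exact branch logic are assumptions based on the conventional Wells rule, not details taken from the study itself.

```python
# Standard Wells criteria for pulmonary embolism; point values are the conventional ones.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,             # leg swelling, pain on palpation
    "pe_most_likely_diagnosis": 3.0,          # PE is the #1 diagnosis, or equally likely
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,  # >=3 days immobilized, or surgery within 4 weeks
    "prior_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,                        # active, treated within 6 months, or palliative
}

def wells_score(findings):
    """Sum the points for every criterion marked True in `findings`."""
    return sum(points for name, points in WELLS_CRITERIA.items() if findings.get(name))

def suggested_testing(score, d_dimer_positive=None):
    """Dichotomized Wells (assumed cutoff): >4 is 'PE likely' and goes straight to CTA;
    otherwise draw a d-dimer first, and CTA follows only if it comes back positive."""
    if score > 4:
        return "CTA"
    if d_dimer_positive is None:
        return "d-dimer"
    return "CTA" if d_dimer_positive else "no further imaging"

# Example: tachycardia plus recent surgery scores 3.0, so a d-dimer comes first.
score = wells_score({"heart_rate_over_100": True, "immobilization_or_recent_surgery": True})
print(score, suggested_testing(score))
```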

So, did the intervention increase the yield of CTAs positive for PE?  Officially, no – the trend from 8.3% positive to 12.7% positive didn’t meet significance. CTA positivity for guideline-compliant testing was 16.7% in the post-intervention group, which, to the authors, validated their intervention.

It is interesting that a cohort of low-risk Wells + positive d-Dimer or high-risk Wells had only a 16% positive rate on a 64-slice CT scanner – which doesn’t really match up with the original data. So, I’m not sure exactly what to make of their intervention, testing strategy, or ED cohort. I think the take-home point is supposed to be that if you can get evidence in front of clinicians, and they do evidence-based things, outcomes will be better – but either this was just too complex a clinical problem to prove it with, or their practice environment isn’t externally valid.

Should Rural Health Care Be Equivalent?

“All residents in the United States should have access to safe, high-quality health care and should have confidence in the health care system regardless of where they live.”

That is the final statement of the accompanying editorial to the JAMA article documenting superior outcomes at urban hospitals vs. critical access rural hospitals for acute MI, CHF, and pneumonia. The acute MI study population is slightly more ill at baseline in the rural hospital sample, but the groups are otherwise similar. Mortality is higher at the critical access hospitals for AMI (26.1% vs. 23.9%, adjusted), CHF (13.4% vs. 12.5%), and pneumonia (13.0% vs. 12.5%, not significant), favoring urban hospitals.

The key feature: critical access hospitals were less likely to have ICUs, cardiac catheterization, or surgical capabilities, and they had reduced access to specialists. Is it any wonder their outcomes are worse? As someone who moonlit in one of these hospitals as a resident, I can guarantee the standard of care in a rural setting is lower.

But, coming back to the original supposition – is it realistic to dedicate the funding and resources to bring rural hospitals up to that standard? Equipping far-flung hospitals to the same standard of care as urban settings, in order to cover the remaining 20% of the population, is likely an unfeasible proposition. Living in a rural area is simply going to come with the risks of unavoidable delays in care and reduced access to specialists and technology.

“Quality of Care and Patient Outcomes at Critical Access Rural Hospitals”
www.ncbi.nlm.nih.gov/pubmed/21730240
“Critical Access Hospitals and the Challenges to Quality Care”
www.ncbi.nlm.nih.gov/pubmed/21730248

Online Publishing of ED Wait Times

When a small city only has two Emergency Departments, you can run a study like this to see what effect publication of ED wait times has on visits.

It is fabulously logical that, if 18 to 40 people a day are looking at your Emergency Department wait times, some portion of them will choose the facility with the shorter wait – or choose not to come to the ED at all, or come in when they otherwise might have stayed home because the wait is short. But this study doesn’t actually try to study that population of interest. They need to somehow capture the individuals who are using the published information to make decisions, rather than looking generally at overall wait time statistics – because, even though they say their results “were consistent with the hypothesis that the publication of wait time information leads to patients selecting the site with shorter wait time”, they are making a huge unsubstantiated leap.

Looking at their descriptive statistics, hardly anything changed that would actually justify their conclusions, and, really, it looks like patients based their decisions pretty heavily on which of the two hospitals was closer – particularly Victoria Hospital, which people only went to if it was nearer. I do also find it fascinating that the mean wait time rose from about 105 minutes to 115 minutes, yet the proportion of time the wait exceeded 2 hours (120 minutes) actually dropped from 13% to 9%. This is how they justify their conclusion that the “spikes” are mitigated by online usage – and it may be true – but there are too many moving parts, and they never actually asked people whether they used the website and acted on its information.

“The effects of publishing emergency department wait time on patient utilization patterns in a community with two emergency department sites: a retrospective, quasi-experiment design.”
http://www.ncbi.nlm.nih.gov/pubmed/21672236

Facebook, Savior of Healthcare

This is just a short little letter I found published in The Lancet. Apparently, the Taiwan Society of Emergency Medicine has been wrangling with the Department of Health over appropriate solutions to the national problem of ED overcrowding. To make their short story even shorter, they ended up forming a group on Facebook and then posting their concerns to the Minister of Health’s Facebook page. This prompted the Minister of Health to make surprise visits to several EDs, and, in some manner, the Taiwanese feel their social networking has led to a favorable response to their public dialogue.

So, slowly but surely, I’m sure all these little blogs will save the world, too.

“Facebook use leads to health-care reform in Taiwan.”
http://www.ncbi.nlm.nih.gov/pubmed/21684378

“Time-Out” In The ED Is Nearly Universally Useless

…but still probably a good idea.

Out of 225 ACEP councillors responding to a survey, 5 knew of an instance in the past year where a time-out may have prevented an error.  So, a year’s worth of personal patient encounters, plus whatever they heard about in their department, multiplied by 225 – which means we’re looking at hundreds of thousands of patient encounters – and there were only a handful of events where a time-out would have helped.
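
To give that denominator a rough scale: the per-physician volume below is purely an assumption of mine (the survey doesn’t report it), but it shows why 5 reports out of 225 respondents implies a vanishingly rare event.

```python
# Back-of-envelope only; the encounters-per-physician figure is an assumed typical
# annual ED volume, not a number from the survey.
respondents = 225
encounters_per_physician_per_year = 3_500
events_where_timeout_might_have_helped = 5

total_encounters = respondents * encounters_per_physician_per_year  # ~787,500
print(f"~1 potentially preventable event per "
      f"{total_encounters // events_where_timeout_might_have_helped:,} encounters")
```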

That being said, time-outs have been part of the Universal Protocol under the National Patient Safety Goals since 2004, because performing the wrong procedure, at the wrong site, on the wrong patient falls squarely into the category of “never events”. It does seem like a no-brainer in the ED, where the procedures we perform are specifically related to the unique presenting event, but errors still occur – and the magnitude of the harm to the few patients who are harmed is probably greater than the cumulative delay in care to everyone else from the time spent performing the time-outs.

“A Survey of the Use of Time-Out Protocols in Emergency Medicine”