Mostly unrelated to Emergency Medicine – but an interesting descriptive study of a downstream phenomenon I see frequently.
For example, I’ll intermittently follow up on a patient to see how they fared as an inpatient. I’ll read the inpatient documentation, consultant reports, etc. – and find the tiny EM HPI perpetuated throughout the chart with minimal modification. This anecdotal experience is backed up by these authors, who used text-compare software to identify copied passages in daily progress notes from an ICU setting. In this ICU at MetroHealth in Cleveland, 82% of resident notes copied at least 20% of the text from the previous day’s progress note – and copied 55% of the prior content on average. Attending notes contained copying slightly less often (74%), but tended to copy more content (61%).
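Mechanically, this sort of text-compare analysis is straightforward – here’s a minimal sketch using Python’s difflib, with toy notes standing in for the authors’ dedicated software and actual data:

```python
from difflib import SequenceMatcher

def copied_fraction(prior_note: str, current_note: str) -> float:
    """Estimate the fraction of today's note duplicated from yesterday's.

    Sums the matching blocks found by SequenceMatcher and divides by
    the length of the current note.
    """
    matcher = SequenceMatcher(None, prior_note, current_note)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(current_note), 1)

# Toy example: a progress note carried forward with one new sentence.
day1 = "Intubated, sedated. Plan: wean vent, continue antibiotics."
day2 = "Intubated, sedated. Plan: wean vent, continue antibiotics. Family meeting today."
print(f"{copied_fraction(day1, day2):.0%} of today's note matches yesterday's")
```

A study-grade tool would tokenize and ignore templated headers, but the flagging logic is the same idea.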
There are no conclusive data on whether this copy/paste practice affects patient outcomes, but it’s an interesting symptom of evolving medical care and documentation in the EHR era. I hope that, as HIT evolves, documentation tools trend toward encouraging concise, effective communication, rather than this sort of (likely ineffective) chart bloat.
“Prevalence of Copied Information by Attendings and Residents in Critical Care Progress Notes”
http://www.ncbi.nlm.nih.gov/pubmed/23263617
The EHR – A Tool For Blocking Admissions
This is a mildly entertaining ethnographic study of how ED physicians, IM physicians, and surgeons used the Electronic Health Record (EHR) in the context of patient care in a tertiary medical center.
Essentially, the authors observed and interviewed residents and attendings in their use of the EHR, and identified its use in a function termed “chart biopsy” during the admission handoff process. Inpatient teams were observed using the EHR to get a quick overview of the patient prior to the handoff, to provide the foundation for the history & physical, and – most entertainingly – as a weapon in negotiating with ED physicians and “blocking” potential admissions. Other amusing anecdotes include the authors’ characterization of inpatient physicians feeling “less ‘at the mercy’ of ED physicians” after doing a pre-handoff chart biopsy, or feeling as though they could guard against the “disorganized ramblings” of the handoff process.
Overall, the authors correctly identify EHRs as increasingly prevalent supplements to traditional information gathering techniques, and make a reasonable proposal for evolution in EHRs to aid the “chart biopsy” process.
“Chart biopsy: an emerging medical practice enabled by electronic health records and its impacts on emergency department-inpatient admission handoffs.”
http://www.ncbi.nlm.nih.gov/pubmed/22962194
Keeping Children Happy
When I started in medicine – hardly long ago – Child Life, if it existed at all in the Emergency Department, might have consisted of a few plastic toys and perhaps a Nintendo Entertainment System. Now, a staple of every department is an iPad, filled with apps and distractions for children.
This is a short article from the pediatric literature reviewing a few cases in which tablet computers proved useful, along with a review of several apps worth loading on for distraction during potentially troubling procedures. Most of the apps reviewed are for the iPad, but equivalents exist for Android devices and the iPhone.
I’ve definitely gotten mileage out of the movie “Toy Story 3” on my iPhone – perfect for the 3 AM laceration repair when Child Life has gone home for the night.
“Using a Tablet Computer During Pediatric Procedures – A Case Series and Review of the ‘Apps’”
How Many Emergency Physicians Are On Twitter?
672.
Or, at least, that’s how many self-identified in their Twitter profiles as professional physicians in Emergency Medicine at the time this descriptive study was undertaken. According to the authors’ estimates, this accounts for ~1.6% of the ~20,000 U.S. board-certified Emergency Physicians. The true number may be higher, owing to profiles that do not identify themselves professionally.
About half were “active”, with a tweet within the last 15 days, and the other half were “inactive”. Active accounts followed more users and were followed by more users. The authors also include a visualization figure showing the interconnectedness of the active Twitter accounts – and, unsurprisingly, everyone tweets to the same group of twits, and vice versa.
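For the curious, the kind of interconnectedness figure they present can be approximated in a few lines of networkx – a toy sketch with invented accounts and follow relationships, not the study’s data or methods:

```python
import networkx as nx

# Invented follow relationships among a handful of "active" accounts;
# the study built its figure from actual Twitter follower data.
follows = [
    ("@em_doc_a", "@em_doc_b"), ("@em_doc_b", "@em_doc_a"),
    ("@em_doc_a", "@em_doc_c"), ("@em_doc_c", "@em_doc_b"),
    ("@em_doc_d", "@em_doc_a"),
]
graph = nx.DiGraph(follows)

# Density near 1.0 and high reciprocity describe the tight,
# self-contained cluster seen among the active accounts.
print(f"density: {nx.density(graph):.2f}")
print(f"mutual follows: {nx.reciprocity(graph):.0%}")
```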
So, it’s a small social media extension of the greater online presence of Emergency Physicians. The primary flaw with the service, as far as promoting wider interaction among online EPs goes, is that it is a closed, self-contained system, separate from the other online resources EPs visit. The value accrues mostly to those who communicate and interact professionally in an active manner; it has less to offer the passive observer.
“Analysis of emergency physicians’ Twitter accounts”
http://www.ncbi.nlm.nih.gov/pubmed/22634832
It Feels Good To Use an iPad
Recently, there has been a great deal of coverage on internet news sites with headlines such as “Study: iPads Increase Residency Efficiency.” These headlines are pulled from a “Research Letter” in Archives of Internal Medicine, reporting from the University of Chicago, regarding the distribution of iPads capable of running Epic via Citrix.
Sounds good, but it’s untrue.
What is true is that residents reported that they used the iPads for work. They additionally thought the iPads saved them time, and thought they improved their efficiency on the wards. That is to say, they liked using the iPad.
The part that isn’t true is where the authors claim an increase in “actual resident efficiency.” By analyzing the hour of the day at which orders are placed, the authors attempt to extrapolate to a hypothetical reality in which iPads are helping their residents place orders more quickly on admitted patients, and place additional orders while post-call, just before leaving the hospital. There are, in fact, no specific data showing that using the iPad makes the residents more efficient – only data showing that the hour of the day at which orders are placed has changed from one year to the next. The iPad has, perhaps, changed their work habits – but without prospectively observing how these iPads are being used, it is impossible to conclude how or why.
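To make the methodological objection concrete: order-entry logs can only ever show a shift in an hour-of-day histogram like the one below – a toy sketch with invented timestamps, not the study’s data:

```python
from collections import Counter

# Invented order hours (0-23) for two academic years; the study compared
# real order-entry logs from the pre- and post-iPad years.
pre_ipad_hours = [7, 8, 8, 9, 10, 14, 16, 17, 17, 18]
post_ipad_hours = [6, 7, 7, 8, 8, 9, 10, 14, 16, 17]

pre, post = Counter(pre_ipad_hours), Counter(post_ipad_hours)
for hour in sorted(set(pre) | set(post)):
    print(f"{hour:02d}:00  pre={pre[hour]}  post={post[hour]}")

# Orders now cluster earlier in the day -- but nothing in this comparison
# says *why*: faster order entry, different rounding times, or new habits.
```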
But, at least they liked them! And, considering how addictive Angry Birds is, I’m surprised their productivity isn’t decreased.
Automagical Problem Lists
This is a nice informatics paper dealing mostly with problem lists. These are meticulously maintained (in theory) by inpatient and ambulatory physicians to accurately reflect a patient’s current medical issues. Then, when the patient arrives in the ED, you do your quick chart biopsy from the EMR, and you can rapidly learn about them. However, these lists are invariably inaccurate – studies show they’ll appropriately be updated with breast cancer 78% of the time, but as low as 4% of the time for renal insufficiency. This is bad because, supposedly, accurate problem lists lead to higher-quality care – more CHF patients receiving ACE inhibitors or ARBs if CHF is on their diagnosis list, etc.
These authors created a natural language processing engine, as well as a set of inference rules based on medications, lab results, and billing codes for 17 diagnoses, and implemented an alert prompt to encourage clinicians to update the problem list as necessary. Overall, 17,043 alerts fired during the study period, and clinicians accepted 41% of the recommendations – which could be better, but is really quite good for an alert. As you might expect, the study group with the alerts generated three times as many additions to patient problem lists. These authors think this is a good thing – although I have seen some incredible problem list bloat.
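For flavor, here’s what one such inference rule might look like in code – the diagnosis, threshold, and structure below are my invention for illustration, not the authors’ actual rule set:

```python
# Hypothetical inference rule: flag probable renal insufficiency when the
# labs say so but the problem list doesn't. The study used rules over
# medications, lab results, and billing codes for 17 diagnoses.
def renal_insufficiency_alert(problem_list: list[str],
                              latest_creatinine_mg_dl: float) -> str | None:
    on_list = any("renal" in problem.lower() for problem in problem_list)
    if latest_creatinine_mg_dl >= 2.0 and not on_list:
        return ("Persistently elevated creatinine: consider adding "
                "'chronic renal insufficiency' to the problem list.")
    return None  # no alert fires

alert = renal_insufficiency_alert(["Hypertension", "Type 2 DM"], 2.4)
if alert:
    print(alert)
```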
What’s interesting is that a follow-up audit of the alerts, evaluating their accuracy against a clinical reading of the patient’s chart, estimated the alerts were 91% accurate – which means all those ignored alerts were actually mostly correct. So, there’s clearly still a lot of important work to be done finding better ways to integrate this sort of clinical feedback into the workflow.
So, in theory, better problem lists, better outcomes. However, updating your wife’s problem list can probably wait until after Valentine’s Day.
“Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial.”
http://www.ncbi.nlm.nih.gov/pubmed/22215056
Heart Failure, Informatics, and The Future
Studies like these are a window into the future of medicine – electronic health records beget clinical decision-support tools that allow highly complex risk-stratification instruments to guide clinical practice. Tools like NEXUS will wither on the vine as oversimplifications of complex clinical decisions – oversimplifications that were needed in a pre-EHR era, when decision instruments had to be memorized.
This study is a prospective observational validation of the “Acute Heart Failure Index” rule – derived in Pittsburgh, applied at Columbia. The AHFI branch points for risk stratification are best appreciated in the paper’s extraordinarily complex flow diagram.
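In lieu of reproducing the diagram, here’s a toy sketch of how this sort of branch-point logic ends up encoded in a CDS – the variables and cutoffs below are invented for illustration and are not the actual AHFI criteria:

```python
# Invented branch points, for flavor only -- the real AHFI tree has many
# more nodes; see the paper's flow diagram for the actual rule.
def ahfi_style_risk(systolic_bp_mmhg: int, bun_mg_dl: int,
                    ischemia_on_ekg: bool) -> str:
    if ischemia_on_ekg or systolic_bp_mmhg < 90 or bun_mg_dl > 40:
        return "high risk"
    return "low risk"

print(ahfi_style_risk(systolic_bp_mmhg=135, bun_mg_dl=22,
                      ischemia_on_ekg=False))  # low risk
```

The point is less the specific cutoffs than that an EHR can evaluate dozens of such branch points automatically – something no memorized rule can match.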
Essentially, research assistants in the ED applied an electronic version of this tool to all patients given a diagnosis of decompensated heart failure by the Emergency Physician – and then followed them for the primary outcome of death or readmission within 30 days. In the end, in their small sample, 10% of the low-risk population met the combined endpoint, versus 30.2% of the high-risk population. Neither group had a very high mortality – most of the difference between groups comes from readmissions within 30 days.
So, what makes this study important isn’t the AHFI itself, or that further research might reasonably validate this rule as an aid to clinical decision-making – it’s the forward progression of using CDS within the EHR to synthesize complex medical data into potentially meaningful clinical guidance.
“Validating the acute heart failure index for patients presenting to the emergency department with decompensated heart failure”
http://www.ncbi.nlm.nih.gov/pubmed/22158534
Your Residents Would Love a Wiki
It’s not a terribly profound paper – along the lines of a “we did this and we liked it” sort of thing – but it is a relevant educational application of wikis in medicine.
The BIDMC Internal Medicine department undertook an initiative to essentially convert all their little handbooks and service guides into an online reference. They chose the wiki interface so anyone could update information or add pages, while allowing updates to be tracked and rolled back as necessary. They promoted it during intern orientation and made a significant effort to get people both to update it and to use it. And, for the most part, they were successful. Most residents (92%) thought it was useful, it was mostly used to find phone numbers and rotation-specific clinical information, and, overall, about half of the PGY-2s and -3s updated the site during the 2009-10 year.
It probably takes a lot of effort and requires just the right collaborative environment, but there are plenty of residencies, departments, and other clinical organizations that could benefit from something similar – particularly those with a lot of students/residents rotating between different services or sites.
“Adoption of a wiki within a large internal medicine residency program: a 3-year experience”
http://www.ncbi.nlm.nih.gov/pubmed/22140210
ED Geriatric CPOE Intervention – Win?
It does seem as though this intervention had a measure of success – based on their primary outcome – but there are shades of grey throughout the article.
This is a prospective, controlled trial of contextual computer decision-support (CDS) incorporated into the computerized provider order entry (CPOE) system of their electronic health record (EHR). They perform a four-phase on/off intervention in which the CPOE either suggests alternative medications or dose reductions for patients >65 years of age. They look at whether the intervention changed the rate at which medication ordering was compliant with medication-safety guidelines in the elderly, and then, secondarily, at the rates of 10-fold errors, medication cancellations, and adverse drug event reports.
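Mechanically, this kind of contextual CDS amounts to a rule lookup at order entry – a minimal sketch with an invented rule table, not the study’s actual knowledge base:

```python
# Invented geriatric dosing rules, keyed by drug: (maximum suggested
# dose in mg, preferred alternative or None). The study's knowledge
# base covered far more agents.
GERIATRIC_RULES = {
    "diazepam": (2.0, "lorazepam"),
    "indomethacin": (25.0, "acetaminophen"),
    "morphine": (2.0, None),
}

def check_order(drug: str, dose_mg: float, age: int) -> str | None:
    """Return an advisory if an order trips a geriatric rule, else None."""
    if age < 65 or drug not in GERIATRIC_RULES:
        return None
    max_dose_mg, alternative = GERIATRIC_RULES[drug]
    if alternative is not None:
        return f"Consider {alternative} instead of {drug} in patients 65+."
    if dose_mg > max_dose_mg:
        return f"Consider reducing {drug} to {max_dose_mg} mg or less in patients 65+."
    return None

print(check_order("diazepam", 5.0, age=72))
```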
The oddest part of this study is their choice of primary outcome measure. Ideally, the most relevant outcome is the patient-oriented outcome – which, in this case, ought to be a specific decrease in adverse drug events in the elderly. However, and I can understand where they’re coming from, they chose to specifically evaluate the usability/acceptability of the CDS intervention to verify the mechanism of intervention. There are lots of studies out there documenting “alert fatigue”, resulting in either no change or even increasing error rates.
As far as the main outcome measure goes, they had grossly positive findings – 31% of orders were compliant during the intervention periods vs. 23% of orders during the control periods. But, 92.5% of recommendations for alternative medications were ignored during the intervention periods – most commonly triggered by diazepam, clonazepam, and indomethacin. The intervention was successful in reducing doses for NSAIDs and for opiates, but had no significant effect on benzodiazepine or sedative-hypnotic dosing.
However, bizarrely, even though there was just a small difference in guideline-concordant ordering, there was a 4-fold reduction in adverse drug events – most of which occurred during the initial “off” period. As a secondary outcome, there’s not much to say about it other than “huh”. None of their other secondary outcomes demonstrated any differences.
So, it’s an interesting study. It is consistent with a lot of previous studies – most alerts are ignored, but occasionally small positive effect sizes are seen. Their primary outcome measure is one of mostly academic interest – it would be better if they had chosen more clinically relevant outcomes. But, no doubt, if you’re not already seeing a deluge of CDS alerts, just wait a few more years….
“Guided medication dosing for elderly emergency patients using real-time, computerized decision support”
http://www.ncbi.nlm.nih.gov/pubmed/22052899
Computer Reminders For Pain Scoring Improve Treatment
This is a paper on an important topic – considering the upcoming CMS quality measures that will track time to pain medication for long bone fractures – demonstrating that a mandatory computer reminder improved pain treatment more than an educational campaign did.
This is a prospective study of 35,628 patients visiting an Australian emergency department, in which the department went through several phases of intervention – the most salient, in the authors’ minds, being required assessment of a pain score at triage. They started by simply observing their performance; then they altered their electronic medical record to mandate input of the pain score at triage. After the mandated scoring, time to analgesia went from a median of 123 minutes to 95 minutes. After the mandate phase, the ED staff underwent an education program regarding pain management in the ED – and the time to analgesia didn’t improve any further.
So, it is reasonable to infer that mandating the pain score at triage had the desired effect of decreasing time to analgesia. However, 95 minutes until analgesia is still terrible. The article would be far more interesting if it truly broke down all the intervals – time to triage, time to room, time to physician, time to analgesia order, etc. – because there are many more data points to gather.
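Computing that breakdown is trivial once the timestamps are captured – a sketch with hypothetical event times for a single visit:

```python
from datetime import datetime

# Hypothetical timestamps for one visit -- the interesting question is
# which interval dominates the 95-minute median, not the total alone.
events = {
    "arrival":         datetime(2012, 1, 1, 14, 0),
    "triage":          datetime(2012, 1, 1, 14, 10),
    "room":            datetime(2012, 1, 1, 14, 55),
    "physician":       datetime(2012, 1, 1, 15, 20),
    "analgesia_order": datetime(2012, 1, 1, 15, 30),
    "analgesia_given": datetime(2012, 1, 1, 15, 35),
}

steps = list(events.items())
for (prior, prior_time), (step, step_time) in zip(steps, steps[1:]):
    minutes = (step_time - prior_time).total_seconds() / 60
    print(f"{prior} -> {step}: {minutes:.0f} min")
```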
Additionally, it might simply be higher yield if – in addition to asking about pain at triage – they had a triage protocol to treat the pain immediately at that point, rather than later downstream.
“Mandatory Pain Scoring at Triage Reduces Time to Analgesia”
http://www.ncbi.nlm.nih.gov/pubmed/21908072