No One Knows How To Diagnose CAD

And, once they diagnose it – it doesn’t seem like anyone knows what to do with it, considering all the brouhaha these days about potentially unnecessary PCI and stenting.

But, this is a prospective coronary CT angiography registry reviewed to determine whether CCTA added any value over conventional stress testing in patients without known CAD.  They reviewed 22,551 patient records; excluded patients with known CAD, incomplete data, or no recent (<3 months) cardiac stress test; and ended up with 6,198 patients.

The point the authors seem to be trying to make is that CCTA is a better test than stress testing, but that’s only part of the story.  The interesting observation along the way is that there is essentially no correlation between stress testing results and CCTA results.  Patients with normal, equivocal, and abnormal stress results had essentially the same incidence of normal, <50%, and >50% coronary stenosis.  And the hidden story about how CCTA is being used in their patient cohort is fascinating – a younger group with typical chest pain and normal stress tests referred to CCTA vs. an older group with less typical symptoms and abnormal stress tests referred to CCTA.

But, then, finally they compare both of their disparate tests to the “gold standard” of invasive angiography, and they find that both tests are awful at predicting >50% coronary stenosis.  Stress testing was 60.4% sensitive and 34% specific, while CCTA was 94% sensitive and 37% specific.  So, we have two tests that are wrong about the presence of disease twice as often as they’re right – and these authors are using a clinically irrelevant 50% stenosis as their “gold standard”.
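
Run those numbers through the likelihood-ratio arithmetic (my math, not the authors’) and the absurdity becomes explicit – a “positive” stress test in this cohort actually carries an LR+ below 1:

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sens / (1 - spec)   # odds multiplier for a positive result
    lr_neg = (1 - sens) / spec   # odds multiplier for a negative result
    return lr_pos, lr_neg

# Figures reported against >50% stenosis on invasive angiography
print(likelihood_ratios(0.604, 0.34))  # stress: LR+ ~0.92, LR- ~1.16
print(likelihood_ratios(0.94, 0.37))   # CCTA:   LR+ ~1.49, LR- ~0.16
```

An LR+ of 0.92 means a positive stress test marginally *lowers* the odds of disease, and an LR- above 1 means a negative test raises them; only the negative CCTA (LR- ~0.16) carries any real information.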

Rather entertaining to observe the difficulty the cardiology literature is having reconciling all their different imaging options with clinically relevant stenoses, much less outcomes.  Good thing all these inadequate tests are cheap and harmless….

“Coronary Computed Tomography Angiography After Stress Testing”

Would Free Medications Help?

It’s too bad this study doesn’t actually look at what I would have hoped it would – but it’s interesting, nonetheless.  One of my hospitals is a true safety-net hospital and we see, repeatedly, repeatedly, repeatedly, the complications of neglected chronic disease.  One of our frequent laments is whether the costs of recurrent acute hospitalization couldn’t be averted a hundred times over if we’d simply sink some money into preventive maintenance care, free medications, etc.

This study almost looks at that.  It’s from the NEJM, comparing outcomes following myocardial infarction between a group receiving completely free medications and a group that does not.  Unfortunately, the group that does not receive free medications still receives heavily subsidized medication support, and is only responsible for a co-pay.

Despite only needing to come up with a co-pay, there’s a significant difference in medication compliance, with an average absolute difference in full adherence with medications of ~5-6%.  With this minimal absolute difference in adherence, the full adherence group had significantly fewer future vascular events – mostly from stroke and myocardial infarction – approximately a 1% absolute decrease.  There was a non-significant decrease in total costs associated with the patients who were on the full-coverage medication plan.
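
For scale, a quick back-of-the-envelope (mine, not the authors’) on that 1% absolute decrease:

```python
arr = 0.01    # ~1% absolute reduction in vascular events with full coverage
nnt = 1 / arr
print(nnt)    # -> 100: cover roughly 100 post-MI patients to prevent one event
```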

Now, they don’t follow up any medication-related adverse events, so this is the most optimistic interpretation of the benefits of full coverage – but it would seem that it is overall cheaper and more beneficial to supply medications for free.  And, it makes me wonder what the results of a similar cost/health-benefit study would show in our safety-net population.

“Full Coverage for Preventive Medications after Myocardial Infarction”

Heart Failure, Informatics, and The Future

Studies like these are a window into the future of medicine – electronic health records beget clinician decision-support tools that allow highly complex risk-stratification tools to guide clinical practice.  Tools like NEXUS will wither on the vine as oversimplifications of complex clinical decisions – oversimplifications that were needed in a pre-EHR era where decision instruments needed to be memorized.

This study is a prospective observational validation of the “Acute Heart Failure Index” rule – derived in Pittsburgh, applied at Columbia.  The AHFI branch points for risk stratification are…best described by the extraordinarily complex flow diagram in the original paper.

Essentially, the research assistants in the ED applied an electronic version of this tool to all patients given a diagnosis of decompensated heart failure by the Emergency Physician – and then followed them for the primary outcomes of death or readmission within 30 days.  In the end, in their small sample, 10% of the low-risk population met the combined endpoint, versus 30.2% of the high-risk population.  Neither group had a very high mortality – most of the difference between groups comes from re-admissions within 30 days.

So, what makes this study important isn’t the AHFI itself, or the reasonable suggestion that further research might validate this rule as an aid to clinical decision-making – it’s the progression toward using CDS within the EHR to synthesize complex medical data into potentially meaningful clinical guidance.
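
As a purely illustrative sketch of what that CDS progression looks like – the thresholds below are invented placeholders, not the actual AHFI branch points, which sprawl across dozens of nodes in the paper’s flow diagram:

```python
def heart_failure_risk_tier(pt):
    """Toy CDS risk stratification of the kind an EHR can compute automatically.
    Thresholds are hypothetical placeholders, NOT the real AHFI rule."""
    if pt["systolic_bp"] < 90 or pt["creatinine"] > 3.0:  # hypothetical branch points
        return "high"
    if pt["troponin_positive"] or pt["bun"] > 40:         # hypothetical branch points
        return "high"
    return "low"

# The EHR walks the full decision tree against structured chart data -
# no memorization required of the clinician.
print(heart_failure_risk_tier(
    {"systolic_bp": 134, "creatinine": 1.1, "troponin_positive": False, "bun": 22}
))  # -> "low"
```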

“Validating the acute heart failure index for patients presenting to the emergency department with decompensated heart failure”
http://www.ncbi.nlm.nih.gov/pubmed/22158534

Cardiology Corner – More Brugada Tidbits

Most physicians are aware of the Brugada Syndrome cardiac repolarization phenotype – the most recognizable being Type 1, or “coved” type.

Type 2 and Type 3, however, are essentially indistinguishable from an incomplete right bundle branch block with ST-segment elevation and a positive T-wave.  These authors took 38 patients referred for ajmaline provocation testing – a small case series – and compared their baseline ECGs.  Of the 14 patients who converted to a Type 1 pattern following ajmaline infusion, the baseline angles of the R’ wave differed significantly – with an alpha angle cut-off of 50 degrees and a beta angle cut-off of 58 degrees.
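
If the criterion holds up, applying it is just two angle measurements and a cutoff comparison (a sketch using the reported cutoffs; the direction – wider angles favoring a true Brugada pattern over incomplete RBBB – is my assumption from the study’s framing):

```python
def favors_brugada_type_2_or_3(alpha_deg, beta_deg):
    """Baseline R'-wave angle criterion from this series: alpha cutoff
    50 degrees, beta cutoff 58 degrees (direction assumed here)."""
    return alpha_deg >= 50 and beta_deg >= 58
```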

A little esoteric, but fascinating.

“New Electrocardiographic Criteria for Discriminating Between Brugada Types 2 and 3 Patterns and Incomplete Right Bundle Branch Block”
http://www.ncbi.nlm.nih.gov/pubmed/22093505

Yet Another Highly Sensitive Troponin – In JAMA

…peddling the same tired phenomenon of magical thinking regarding the diagnostic miracle of highly sensitive troponins.  However, this one is different because it’s been picked up by the AP, CBS News, Forbes, etc. saying: “Doctors are buzzing over a new blood test that might rule out a heart attack earlier than ever before” and other such insanity.  Yes, our hearts are in atrial flutter around the water cooler about a new assay that changes sensitivity from 79.4% to 82.3% at hour 0 and 94.0% to 98.2% at hour 3.

Unless you actually read the article.

Somehow, contrary to every other high-sensitivity troponin study, this particular highly-sensitive troponin had increased specificity as well – which simply doesn’t make sense.  If you’re testing for the presence of the exact same myocardial strain/necrosis byproduct as a conventional assay, it is absolutely inevitable that you will detect a greater number of >99th percentile values in situations not reflective of acute coronary syndrome.  The only way to increase both sensitivity and specificity is to measure something entirely different.

Or, if it suits your study aims, you can manipulate the outcomes on the back end.  In this study, the final diagnosis of ACS “was adjudicated by 2 independent cardiologists” whose diagnostic acumen is enhanced by financial support from sources including Brahms AG, Abbott Diagnostics, St Jude Medical, Actavis, Terumo, AstraZeneca, Novartis, Sanofi-Aventis, Roche Diagnostics, and Siemens.

I am additionally not impressed by their results reporting – sensitivity and specificity, followed by the irrelevant positive and negative predictive values.  Since PPV and NPV are determined by the prevalence of disease in their cohort, they’re giving us numbers that are potentially not externally valid.  Rather, they should be reporting positive and negative likelihood ratios – less cognitively facile than PPV and NPV, perhaps, but at least not misleading.
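
To illustrate – with made-up assay numbers, not the paper’s – hold sensitivity and specificity fixed, and PPV collapses as prevalence falls while the likelihood ratios stay put:

```python
def ppv_npv(sens, spec, prev):
    """Predictive values via Bayes' theorem, given disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Hypothetical assay: 98% sensitive, 80% specific (NOT the paper's figures)
for prev in (0.30, 0.10, 0.02):
    print(prev, ppv_npv(0.98, 0.80, prev))
# PPV falls from ~0.68 to ~0.35 to ~0.09 as prevalence drops, while
# LR+ = 0.98/0.20 = 4.9 and LR- = 0.02/0.80 = 0.025 the entire time.
```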

And this is from JAMA.  Oi.

“Serial Changes in Highly Sensitive Troponin I Assay and Early Diagnosis of Myocardial Infarction”

How Frequently Is The Cath Lab Cancelled?

In North Carolina – a fair bit, actually.

This is a 14-hospital registry of cardiac catheterization lab activations, which the authors retrospectively evaluated to determine how many were subsequently cancelled.  They don’t delve into a great deal of detail regarding the specific findings that accounted for each cancellation – they simply observe the broad categories.

Of all cath lab activations, 15% were judged “inappropriate”, with the gold standard being the consulting cardiologist’s opinion.  Of the cancellations, 40% were based on reinterpretation of the EMS ECG, 31% on the ED ECG, and the remainder were “not cath lab candidates”.  The authors’ main focus in their conclusion is the difference between EMS ECG and ED ECG cancellations due to ECG reinterpretation following activation.

What’s more interesting from the paper, however, is when they break it down to the precise cohorts of activation and arrival – and note that 24.7% of EMS activations were subsequently judged inappropriate.  It is also interesting that 13% of non-PCI center activations were inappropriate vs 8% of PCI center activations.  Reading between the lines, there’s probably some experiential component to the differences in activation rates, but this study doesn’t specifically look at volume and training.

“Rates of Cardiac Catheterization Cancelation for ST Elevation Myocardial Infarction after Activation by Emergency Medical Services or Emergency Physicians: Results from the North Carolina Catheterization Laboratory Activation Registry (CLAR)”
http://www.ncbi.nlm.nih.gov/pubmed/22147904

High-Sensitivity Troponin Dead End

Another article trying to work the unworkable – the balance between sensitivity and specificity.

From New Zealand, an attempt to evaluate the Roche Laboratories hsTnT assay in the interests of performing accelerated rule outs in the ED – looking at any combination of initial value, 2-hour value, delta between 0-2 hour value, etc.  And, essentially, any strategy you choose is wrong.

On one hand, you can get up to 91.4% specificity for their gold standard of AMI by requiring an hsTnT >14 ng/L and a 20% delta change at 2 hours – but sensitivity drops to 72%.  Conversely, you can reach a sensitivity of 98.8% – which is the point of these hsTnT testing strategies – but specificity drops to 56.4%.  Unless you’re doing something intelligent with all those false positives that isn’t harmful, expensive, or invasive, the costs of zero-miss are, once again, too high.
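
As a sketch of the trade-off – the “specific” arm below uses the thresholds as stated, while the construction of the maximally sensitive arm is my assumption (any elevated value counts), since the exact combination isn’t spelled out here:

```python
CUTOFF_NG_L = 14  # 99th-percentile cutoff for the Roche hsTnT assay

def rule_in_specific(tnt_0h, tnt_2h):
    """'Specific' strategy as stated: 2-hour hsTnT >14 ng/L AND >=20% delta
    (91.4% specificity, 72% sensitivity in this cohort)."""
    delta = abs(tnt_2h - tnt_0h) / tnt_0h if tnt_0h else float("inf")
    return tnt_2h > CUTOFF_NG_L and delta >= 0.20

def rule_in_sensitive(tnt_0h, tnt_2h):
    """Assumed 'sensitive' strategy: any value above the cutoff
    (98.8% sensitivity, 56.4% specificity)."""
    return tnt_0h > CUTOFF_NG_L or tnt_2h > CUTOFF_NG_L
```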

“High-sensitivity troponin T for early rule-out of myocardial infarction in recent onset chest pain”

It’s Another Chest Pain Prediction Rule!

Yet again, the insanity of the race to a zero-miss culture funds another chest pain discharge prediction rule.  In fact, the most telling part of this paper is in the very end when they compare the chest pain admission rates of the Canadian hospitals in this article to the U.S. hospital – 18% and 20% in Canada compared to 96% in the U.S. (combined ED observation status and inpatient).  The difference in those numbers is insane – and I’m sure people could easily debate which is the preferred side of those numbers to be on.

In any event, the study is a prospective, observational data-gathering study of 64 variables related to the presentation of chest pain – some of which are objective and some of which are historical.  It’s an interesting read – in part because the inter-observer kappa for a lot of the historical variables is so terrible they weren’t even usable.  After collecting all their data, they did 30-day telephone follow-up or vital records review to evaluate the combined endpoint of death, myocardial infarction, or revascularization.

Via the magic of recursive partitioning, the combination of no new EKG changes, a negative initial troponin, no history of CAD, atypical pain, and age less than 40 years separated out 7.1% of their study population with zero 30-day outcomes.  Adding a second negative troponin six hours later for the 41-50 year group captures another 11.2% of patients with zero outcomes.  So, a facility that admits 96% of its patients could potentially reduce admissions – but the rule might have less utility in Canada.
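
The branch points as described reduce to a few lines of logic (a sketch of the rule, not a validated instrument – the field names are mine):

```python
def low_risk_for_discharge(pt):
    """Low-risk branch as described: no new EKG changes, negative initial
    troponin, no history of CAD, atypical pain; age <40 qualifies outright,
    while ages 41-50 additionally need a negative 6-hour troponin."""
    baseline = (not pt["new_ekg_changes"]
                and not pt["initial_troponin_positive"]
                and not pt["history_of_cad"]
                and pt["pain_atypical"])
    if not baseline:
        return False
    if pt["age"] < 40:
        return True   # the 7.1% slice with zero 30-day outcomes
    if pt["age"] <= 50:
        return not pt["six_hour_troponin_positive"]  # the additional 11.2%
    return False
```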

I’d rather see a two-hour second troponin than a six-hour one; it might reduce sensitivity, but it’s wholly impractical to tie up a bed in the ED for 6 hours for a patient you want to send home.  And, like most of these articles, the combined endpoint of death, MI, and revascularization is irritating.  Considering there were twice as many revascularizations as myocardial infarctions, there really ought to be more granularity in these sorts of studies with regard to the actual coronary lesions identified rather than simply lumping them into a combined endpoint.

“Development of a Clinical Prediction Rule for 30-Day Cardiac Events in Emergency Department Patients With Chest Pain and Possible Acute Coronary Syndrome”
http://www.ncbi.nlm.nih.gov/pubmed/21885156

We Overestimate CAD Pretest Probability

The ACC/AHA clinical practice guidelines have a set of reference values for the pretest probability of >50% stenotic coronary artery disease based on the type of pain and age.  These values range from 2% in a 30-year-old woman with non-anginal pain to 94% in a 60-year-old man with typical angina.

And, turns out, this is way off.

This is a CCTA registry study of 14,048 consecutive patients with suspected CAD undergoing coronary CT angiography, looking at both the incidence of 50% luminal narrowing (clinically interesting) and of 70% luminal narrowing (potentially flow-limiting), and correlating them with symptom classification: asymptomatic, non-anginal pain, atypical angina, typical angina, or “dyspnea only”.

The meaningful tables of results somewhat defy summarization, but they have plenty of hypertensives with dyslipidemia – and not very many diabetics or smokers – in their cohort.  In the end, however, none of the observed CAD prevalence was anywhere close to the predicted pretest probabilities.  The cohort with the highest prevalence of CAD was typical angina in males aged 70+ – but even there, only 53% had a 50% lesion.  More than anything, age and gender were the most significant predictors of CAD – with no population of women having greater than a 29% incidence.

It’s an interesting table worth looking at – CAD really doesn’t kick in until after age 40, and, even then, only mostly in men, and, even then, only in patients with typical symptoms.  Once you hit age 50 in men, however, there’s CAD everywhere, even with atypical (or no) symptoms.

There was also some variability by study site – with the 2,225 from Korea having very little CAD and the 29 from the Swiss site having markedly more, but the remainder are relatively similar.

I love studies that just present reams of data and don’t try to push any particular sponsored agenda.

“Performance of the Traditional Age, Sex, and Angina Typicality–Based Approach for Estimating Pretest Probability of Angiographically Significant Coronary Artery Disease in Patients Undergoing Coronary Computed Tomographic Angiography”
http://www.ncbi.nlm.nih.gov/pubmed/22025600

Prolonged QT – Don’t Believe The Hype?

Much ado is made about the risk of QT prolongation and the development of malignant arrhythmias, particularly Torsades de Pointes – but how frequently does TdP actually occur in patients with QT prolongation?  Should we be worried about every EKG that crosses our paths with a prolonged QT?

It seems, like so many things, the answer is yes and no.  This is a prospective observational study from a single institution that installed cardiac monitoring enabling minute-by-minute measurement and recording of QT intervals in their monitored inpatient population.  They evaluated 1,039 inpatients over 67,648 hours of monitoring, and found these patients spent 24% of their monitored time with a prolonged QTc (>500ms).  One single patient had a cardiac arrest event where TdP was evident on the monitoring strip – a comorbidly ill heart failure patient whose QTc ranged as high as 691ms.

The authors then went back to determine whether prolonged QT was associated with all-cause mortality among the 41 patients who died during the study period, finding mortality of 8.7% in patients with QT prolongation versus 2.6% in those without.  However, as you can imagine, there are massive baseline differences between the QT-prolonged and non-QT-prolonged populations, many of which contribute greater effects to in-hospital all-cause mortality.  The authors attempt logistic regression and finally come up with an OR of 2.99 for QT prolongation and all-cause mortality – a smaller effect than CVA, obesity, pro-arrhythmic drug administration, or high serum BUN.

It’s reasonable to say that patients with a prolonged QT are at higher risk for death – but it’s also reasonable to say that sick patients at a higher risk of death are more likely to have a prolonged QT.  Torsades was rare, even with the thousands of hours of QT prolongation noted.  I would not get over-excited about QT prolongation in isolation, but, rather, only in the context of multiple risk factors for mortality in acute illness.

“High prevalence of corrected QT interval prolongation in acutely ill patients is associated with mortality: Results of the QT in Practice (QTIP) Study”
http://www.ncbi.nlm.nih.gov/pubmed/22001585