To bias is to be human, and this is a nice review of some of our own intrinsic publication biases. It's fun to get excited about a new biomarker promising more sensitive or specific identification of disease and a streamlined approach to medical decision-making. And then you get stuck with something like D-dimer or BNP, tests that give us information we rarely use appropriately.
These authors pulled "highly cited" articles evaluating biomarker utility, examined the reported findings, and then pooled the results of subsequent, larger follow-up studies and meta-analyses. In 83% of the highly cited studies, the reported effect size was larger than that of the corresponding meta-analysis, and only 7 of the 35 biomarkers reviewed even had relative risk estimates greater than 1.37 in the meta-analyses.
Jerry Hoffman likes to say on Emergency Medical Abstracts that if you just sit back and skeptically critique everything, you'll end up being right most of the time. This article demonstrates just how frequently you'll look smart by not getting overexcited about the most recent fantastic discovery.
“Comparison of Effect Sizes Associated With Biomarkers Reported in Highly Cited Individual Articles and in Subsequent Meta-analyses.”
http://www.ncbi.nlm.nih.gov/pubmed/21632484