
Journal Bias: Obvious and Occult

Arthur Croft, DC, MS, MPH, FACO

Readers of scientific literature gleefully devour the latest great discovery. Likewise, health care practitioners eagerly consume the newest breakthroughs in their respective fields. Veterans of both worlds, however, are also accustomed to the converse: the overturning of the grand new theory or the refuting of last year's breakthrough. It's part and parcel of our world, because the very nature of anything scientific is its vulnerability in the crucible of the scientific method. No theory can ever be proved beyond question, but all remain open to falsification, the criterion Sir Karl Popper placed at the heart of genuine science.

Why do we so often see these scientific about-faces? Explanations include the very common reality of regression to the mean: one-shot wonders are rarely reproducible, and they litter the long highway of medical progress. Several years back, a group out of Norway, applying to the neck an MRI technique usually reserved for the extremities, in conjunction with a somewhat unusual method of viewing the studies, determined that the alar ligaments of many patients with chronic whiplash showed evidence of internal degeneration or destruction. This seems to have triggered a nearly manic interest in that little ligament.

A few years later, though, a German group attempted to repeat the study and was unable to reproduce the results. Regression to the mean virtually guarantees that any group status, difference or condition initially found to be unusual (i.e., statistically significant) is likely to be diminished or absent in subsequently selected groups, in the same way that the freshman wunderkind basketball player often proves something of a disappointment by junior year, or the way children of very tall parents are typically shorter than their parents. Nature seeks her mean in all things.
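
To make the point concrete, here is a minimal sketch in Python. It is purely illustrative; nothing in it comes from the Norwegian or German data. Each observed score is a stable true value plus random measurement noise, and the subjects who look most extreme on a first measurement drift back toward the population average when measured a second time.

import random

random.seed(1)
N = 10_000

# Each subject has a stable true value; each "measurement" adds random noise.
true_values = [random.gauss(0, 1) for _ in range(N)]
first = [t + random.gauss(0, 1) for t in true_values]
second = [t + random.gauss(0, 1) for t in true_values]

# Select the "remarkable" subjects: the top 5 percent on the first measurement.
cutoff = sorted(first)[int(0.95 * N)]
selected = [i for i in range(N) if first[i] >= cutoff]

mean_first = sum(first[i] for i in selected) / len(selected)
mean_second = sum(second[i] for i in selected) / len(selected)
print(f"selected group, first measurement:  {mean_first:.2f}")   # well above the population mean of 0
print(f"selected group, second measurement: {mean_second:.2f}")  # noticeably closer to 0

Run it as many times as you like: the selected group's second-measurement average reliably lands closer to zero than its first, which is precisely the fate of the one-shot wonder.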

There are other explanations, of course – some not so innocent, many due to scientific misadventure or naive blunder, and some opaque to even the sharpest eye. Journal bias, which might also be called publication bias, is a particularly important member of this fraternity of occult bias. It takes many forms and, although it's natural to suspect it, it's nearly impossible to prove, especially for the end user – you.

Having served as everything from editor and associate editor to manuscript reviewer for a number of journals in fields ranging from engineering to chiropractic to surgery and general medicine over more than two decades, I've glimpsed this kind of bias from an insider's perspective. Or perhaps I should say I've seen confirmation of what many astute readers will have suspected all along. This editorial touches on a few of these forms of bias and then provides a stunning example: one, in fact, that inspired me to write this piece.

Sensationalism

The first kind of journal bias is the most obvious: sensationalism. Journal editors, I'm sure, harbor secret fantasies of being the first to publish a revolutionary breakthrough or discovery. The antithesis, negative results on hypotheses that didn't pan out, is hardly interesting by comparison. However, these noble works should be published for a number of reasons that are probably obvious. Science, after all, is a self-correcting enterprise.

In the real world, as they say, these papers rarely make the grade at prestigious journals like The New England Journal of Medicine or Spine. Once rejected there, the authors shop them around until they are finally accepted, often by obscure start-up or Third World journals.

Peer Review

Peer-reviewed journals typically have a cadre of invited reviewers who invariably do the work pro bono publico. Many are themselves researchers or writers and, as a result, may be inclined (subconsciously or otherwise) to protect a sacred cow or pet theory. And even though the reviewers are blinded to the identity of the authors, it is often easy to tell who they are, especially when they reference themselves. The editors, of course, are never blinded to the authors' identity. The chiropractic profession has made strides in this area, but the turf is far from level. When I first started at Spine, for example, they would reluctantly publish a paper with a DC as author, but only in the company of other (medical or PhD-level) authors, and the "DC" would be quietly omitted. Those days are mostly behind us.

A less appreciated form of bias is the reverse application of the fundamental purpose of the peer-review process. Imagine, for example, an editor eager to fill each issue with quality work. The editor knows that if two of the three manuscript reviewers decide a paper needs major revisions, or condemn it outright as unpublishable, he or she is more or less obliged to reject that manuscript.

Papers are subjected to the deepest scrutiny, of course, when the subject is the reviewer's own specialty. Because such reviewers know that literature so well, they catch all of the subtle errors in the interpretation of prior work, and most or all of the citation errors. These experts also know what literature the authors have failed to consider: studies, in fact, that might present awkward or sticky challenges to the authors' hypotheses. A generalist reviewer will usually miss those problems.

Anticipating this surgical-level, hyper-rigorous review, an editor eager to publish a paper expeditiously might bypass the very reviewer(s) who would be most qualified to review the manuscript in order to reduce the level of criticism and the risk of rejection recommendations.

Does this pestiferous practice really occur, or am I just a bit paranoid? That is something I suppose we'll never know with certainty, but I do know that for more than one journal for which I review, I virtually never see the whiplash-related manuscripts anymore. I get virtually everything else, including things I am not really qualified to review. I find it hard to believe that this is merely an oft-repeated chance occurrence.

Selective Reporting

Here is an example of publication bias that comes by way of a very prestigious journal, The New England Journal of Medicine.1 The authors of this study compared the results of trials of 12 antidepressant drugs. This information, obtained from the FDA, covered study data for 12,564 patients participating in numerous trials. They matched these outcomes with published papers and found that, among the FDA-registered trials, 31 percent were never published. Only one of the trials the FDA considered positive (i.e., showing evidence that the drug was more effective than a placebo) went unpublished, while, with only a few exceptions, trials the FDA did not consider positive were either never published or were presented in such a way as to convey a positive outcome.

Of the published papers, 94 percent reported positive results. In the full FDA data set, which is really the gold standard here, only 51 percent of the trials were positive.
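
The arithmetic is easy to reproduce. The sketch below uses assumed trial counts chosen only to match the percentages quoted above (they are not figures lifted from the FDA filings); it shows how publishing nearly every positive trial, spinning some negative ones and shelving the rest turns a roughly 51 percent true success rate into a roughly 94 percent apparent one.

# Assumed counts, chosen to reproduce the percentages quoted above.
positive_trials = 38      # trials the FDA judged positive
negative_trials = 36      # trials the FDA judged negative or questionable

published_positive = 37   # positive trials published as positive
published_spun = 11       # negative trials written up to convey a positive outcome
published_negative = 3    # negative trials published as negative

total_trials = positive_trials + negative_trials
total_published = published_positive + published_spun + published_negative
unpublished = total_trials - total_published

print(f"true positive rate (FDA data):      {positive_trials / total_trials:.0%}")                           # ~51%
print(f"apparent positive rate (published): {(published_positive + published_spun) / total_published:.0%}")  # ~94%
print(f"share of trials never published:    {unpublished / total_trials:.0%}")                               # ~31%

The reader browsing only the published literature sees the 94 percent figure; the 51 percent figure stays locked in the regulatory file drawer.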

Selective reporting (i.e., failing to report negative or adverse results) can have adverse consequences for patients, doctors and insurers. A recent meta-analysis of this same literature concluded that one of the most popular classes of antidepressants (selective serotonin reuptake inhibitors, or SSRIs) is generally no more effective than placebo, while another recent study has shown that in a large subpopulation of the elderly, these drugs are commonly prescribed even though no clinical indicators of depression are present.

This should provoke some caution, particularly in view of the tendency to forget that older patients may have reduced drug metabolism and clearance, and the fact that some of the common side effects can increase the likelihood of falls or other mishaps, which, in this population, can be disastrous.

Reference

  1. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358(3):252-260.
September 2011