
The Researcher's Toolbox: The Wrong Stuff

Anthony Rosner, PhD, LLD [Hon.], LLC

In Barry Levinson's cinematic masterpiece "Diner," close friends Fenwick and Boogie are out for a country drive when Boogie espies and attempts to pick up a spectacularly beautiful woman on horseback. He asks her what her name is; she casts a wry smile and answers, "Chisholm - as in the Chisholm Trail," and promptly rides off into the distance without uttering another word. Fenwick's classic comment on the botched situation is, "Do you ever think that there's something going on here that we don't know about?"1

Yes, Virginia, there is indeed something going on here that we don't always know about. It is an unfortunately widespread phenomenon in clinical research: we come away with misleading information for the simple reason that we have asked the wrong question or used the wrong frame of reference.

The most recent example, which garnered a fair share of media attention, concerned a meta-analysis of the effects of high dosages of vitamin E. In this review of 19 clinical trials involving 135,967 chronically ill participants taking vitamin E, the authors concluded that there appeared to be a slight elevation of the mortality risk ratio in the pooled trials at dosages exceeding 400 IU/d.2 Before jumping to the conclusion that vitamin E at these higher doses (with 400 IU/d commonly sold over the counter) is taboo, we have to take stock of the operative phrase, "chronically ill." Without distinguishing the types of patient ailments, this investigation could easily have oversampled seriously ill patients and then misleadingly laid their increased mortality solely at the feet of vitamin E. So, the take-home question here is: Can this observation be generalized to the population at large? The answer would seem to be in the negative.

[Editor's note: Dr. Rosner's comprehensive review of the Miller, et al., study on vitamin E is available on the FCER Web site: www.fcer.org/html/News/vitaminE.htm.]
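
To see how such confounding by indication can arise, consider the minimal sketch below. Every rate in it is a hypothetical assumption chosen for illustration - none is drawn from the Miller data - but it shows how a pooled comparison can make an inert supplement look lethal simply because sicker patients gravitate toward the higher dose:

```python
# A minimal sketch of confounding by indication. All rates below are
# hypothetical assumptions for illustration; none come from the Miller data.
import random

random.seed(1)

def simulate(n=100_000):
    """Tally deaths and patients in each dose arm of a pooled comparison."""
    arms = {"high_dose": [0, 0], "low_dose": [0, 0]}  # [deaths, patients]
    for _ in range(n):
        severely_ill = random.random() < 0.30
        # Assumption: sicker patients are likelier to be on high-dose vitamin E.
        on_high_dose = random.random() < (0.70 if severely_ill else 0.30)
        # Assumption: death depends ONLY on illness severity, never on dose.
        died = random.random() < (0.20 if severely_ill else 0.02)
        arm = arms["high_dose" if on_high_dose else "low_dose"]
        arm[0] += died
        arm[1] += 1
    return arms

arms = simulate()
high = arms["high_dose"][0] / arms["high_dose"][1]
low = arms["low_dose"][0] / arms["low_dose"][1]
print(f"High-dose mortality: {high:.3f}")
print(f"Low-dose mortality:  {low:.3f}")
print(f"Apparent risk ratio: {high / low:.2f} (true causal effect of dose: 1.00)")
```

Run as written, the high-dose arm dies at more than twice the rate of the low-dose arm, even though dose has no causal effect whatsoever in the simulation - exactly the artifact that stratifying patients by illness severity would expose.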

Design flaws can appear not only in the representativeness of the patients studied, but also in whether the treatment arm truly represents the therapeutic intervention that the patient may face. Elsewhere, I3 and several others4-6 have questioned Cherkin's use of a single side-posture manipulation in a clinical trial, which was then incorrectly said to be representative of chiropractic management.7 In an equally myopic medical application, the cancer activist Frank Wiewel has argued that several investigations involving beta-carotene8 used only a single synthetic carotenoid, rather than the complex which occurs in nature, and that the coloring agent in the beta-carotene pill contains a yellow dye which is carcinogenic. Wiewel has argued that the National Cancer Institute commits a common error seen in fastidious randomized clinical trials by routinely fixating "on single magic bullet items," diminishing or bypassing the effect seen naturally from a combination of elements.8

A third example of a clinical disconnect is evident when we compare the pain-relief results of two clinical trials addressing dysmenorrhea - performed by the same research team, but, as we shall see, under startlingly different circumstances. The first, a pilot study, reported that side-posture spinal manipulation yielded substantial pain relief in patients diagnosed with dysmenorrhea, compared to a low-force sham procedure.9 But in the full-scale trial that followed, no significant improvement in pain scores could be seen in the manipulated group compared to the sham cohort.10 Does this larger trial trump the pilot and lead us to conclude that spinal manipulation is unlikely to produce a beneficial effect in women with menstrual cramping? Not necessarily, because a closer look at the initial pain scores in the full-scale trial immediately reveals that those numbers are substantially lower - in fact, almost as low as the final scores in the pilot study. Then you read that the authors had difficulty recruiting patients for the larger trial, so they relaxed the admission requirements such that women could enter the trial up to 48 hours after their last encounter with pain. This essentially means that many women with little or no pain at all embarked upon the trial; how, then, could they ever have been expected to improve? In retrospect, one wonders why the penny did not drop sooner, before this second, full-scale trial - seemingly a road to nowhere - was allowed to proceed. To their credit, the authors of this second trial concluded that the negative result obtained was not necessarily due to the lack of an effect of spinal manipulation, but possibly due to an insufficient placebo treatment.10
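
The floor effect at work here is easy to demonstrate with made-up numbers. The sketch below assumes hypothetical 0-10 pain scores and a fixed, genuine treatment benefit; none of its figures come from either trial:

```python
# A minimal sketch of a "floor effect." All pain scores and the assumed
# treatment benefit are hypothetical; nothing here comes from either trial.
import random

random.seed(1)

def mean_improvement(draw_baseline, true_relief=2.0, n=60):
    """Average pain reduction (0-10 scale) in a treated group.

    draw_baseline -- function returning one patient's pain score at admission
    true_relief   -- the assumed genuine benefit of the intervention
    """
    total = 0.0
    for _ in range(n):
        before = draw_baseline()
        after = max(0.0, before - true_relief)  # pain cannot fall below zero
        total += before - after
    return total / n

# Pilot-style admission: women enrolled while in significant pain.
strict = mean_improvement(lambda: random.uniform(5.0, 9.0))

# Relaxed admission (up to 48 hours after the last pain): about half the
# entrants arrive pain-free, the rest in mild-to-moderate pain.
relaxed = mean_improvement(
    lambda: 0.0 if random.random() < 0.5 else random.uniform(1.0, 4.0)
)

print(f"Mean improvement, strict admission:  {strict:.2f}")
print(f"Mean improvement, relaxed admission: {relaxed:.2f}")
```

The identical intervention, carrying the identical built-in benefit, appears less than half as effective under the relaxed criteria - not because it works any less well, but because many participants have no pain left to relieve.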

A fourth case, in which an obviously misleading answer was preordained before the clinical trial even began, comes to us from the medical literature, involving two antifungal agents compared in their ability to treat fungal infections in cancer patients with neutropenia. In this almost mind-boggling case, which I have discussed in detail elsewhere,3 the drug consistently found to be inferior had in most cases been administered orally, when it was already known to be poorly absorbed by that route and best given intravenously. Curiously (but perhaps not by coincidence), virtually all of the investigators involved in these particular investigations had financial ties to Pfizer, the manufacturer of the superior drug!11

The story becomes even graver when we turn to life-threatening conditions, such as cancer. Here, Robert Houston raises a compelling argument about spontaneous remissions, the rate of which is extremely low. Therefore, argues Houston, if one encounters as few as two cases in which such remissions occur without simultaneous treatment by conventional therapy, we have an issue that needs to be taken extremely seriously. Case reports also tend to contain far more clinical detail than the aggregated data of clinical trials, so it would be remiss to dismiss these remissions as merely anecdotal. Finally, asks Houston, if case studies were "an entirely worthless methodology," why do they continue to be published in The New England Journal of Medicine and JAMA?12
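
Houston's intuition can be made concrete with a back-of-the-envelope calculation. The remission rate and series size below are purely illustrative assumptions (true spontaneous remission rates vary by cancer type and are themselves debated):

```python
# A back-of-the-envelope sketch of Houston's argument. The remission rate
# and series size are illustrative assumptions, not established figures.
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """Binomial probability of at least k events in n independent trials."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k))

p = 1e-5  # assumed spontaneous remission rate: 1 in 100,000
n = 50    # a modest case series from a single practice

print(f"P(2 or more remissions by chance among {n} patients): "
      f"{prob_at_least(2, n, p):.1e}")
```

Under these assumptions, the chance of two or more spontaneous remissions surfacing in so small a series is on the order of one in ten million - small enough that two well-documented cases warrant serious attention rather than reflexive dismissal.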

What are the lessons of import here? The first is certainly not to dismiss randomized controlled trials (RCTs) out of hand. At the same time, there is an impressive list of questions that one should ask if RCTs are to be judiciously employed:

  1. Are the treatments representative of the actual clinical practices perceived to be effective?
  2. Are the patient groups representative of those seen in practice?
  3. What is the severity of the condition, its chronicity, and its comorbidities?
  4. What are the potential long-term effects that might extend beyond the treatment period (potential benefits and harms)?

One possible escape from this dilemma is what has recently been described as the pragmatic clinical trial (PCT), which possesses the following characteristics:13

  1. The hypothesis and study design are formulated around the information needed to make a decision.
  2. Practical questions about the risks, benefits and costs of an intervention, as they would occur in clinical practice, are explored.
  3. Clinically relevant interventions are selected for investigation.
  4. PCTs include a diverse population of study participants, recruited from a variety of practice settings.
  5. PCTs collect data on a broad range of health outcomes.

With the traditional hierarchy of medical evidence having been amply challenged by Wayne Jonas,14 David Sackett,15,16 and others,17,18 it is imperative not to become slavishly bound to RCTs as the sole standard on which guidelines or best-practice recommendations are founded. For it has been reported elsewhere that some 37 percent of medical interventions are supported by an RCT, while an average of 76 percent of such treatments are supported by "some form of compelling evidence."19 How, then, do we account for the discrepancy - 39 percentage points' worth of treatments resting on evidence other than RCTs, a figure larger than the 37 percent supported by RCTs in the first place? The answer has to lie in our ability to trust sound clinical observation, which should never be forgotten in the patient-centered paradigm that has increasingly become the model of chiropractic care. In following this principle, we can avoid using "the wrong stuff" and wasting precious resources and time in our clinical investigations.

To sum all of this up - with this communication dispatched just half a month from the Ides of March - it is tempting to borrow that time-honored quote from Shakespeare's Julius Caesar: "The fault, dear Brutus, is not in our stars but in ourselves, that we are underlings."20

References

  1. Levinson B. Diner. Warner Brothers, 1982.
  2. Miller ER, Pastor-Barriuso R, Dalal D, et al. Meta-analysis: high-dosage vitamin E supplementation may increase all-cause mortality. Annals of Internal Medicine 2005;142(1):1-11.
  3. Rosner A. Fables or foibles: inherent problems with RCTs. Journal of Manipulative and Physiological Therapeutics 2003;26(7):460-467.
  4. Chapman-Smith D. Back pain, science, politics and money. The Chiropractic Report November 1998;12(6).
  5. Freeman MD, Rossignol AM. A critical evaluation of the methodology of a low-back pain clinical trial. Journal of Manipulative and Physiological Therapeutics 2000;23(5):363-364.
  6. Royal College of General Practitioners. Unpublished update of the CSAG guidelines, 1999.
  7. Cherkin DC, Deyo RA, Battie M, et al. Comparison of physical therapy, chiropractic manipulation, and provision of an educational booklet for the treatment of patients with low back pain. New England Journal of Medicine 1998;339(14):1021-1029.
  8. Wiewel F. In Hess D. Evaluating Alternative Cancer Therapies: A Guide to the Science and Politics of an Emerging Medical Field. New Brunswick, NJ: Rutgers University Press, 1999, pp. 57-64.
  9. Kokjohn K, Schmid DM, Triano JJ, Brennan PC. The effect of spinal manipulation on pain and prostaglandin levels in women with primary dysmenorrhea. Journal of Manipulative and Physiological Therapeutics 1992;15(5):279-285.
  10. Hondras MA, Long CR, Brennan PC. Spinal manipulative therapy vs. a low-force mimic maneuver for women with primary dysmenorrhea: a randomized, observer-blinded, clinical trial. Pain 1999;81(1-2):105-114.
  11. Johansen HK, Gotzsche PC. Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. Journal of the American Medical Association 1999;282(18):1752-1759.
  12. Houston R. In Hess D. Evaluating Alternative Cancer Therapies: A Guide to the Science and Politics of an Emerging Medical Field. New Brunswick, NJ: Rutgers University Press, 1999, pp. 132-144.
  13. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 2003;290(12):1624-1632.
  14. Jonas WB. The evidence house: how to build an inclusive base for complementary medicine. Western Journal of Medicine 2001;175:79-80.
  15. Sackett DL. Editorial: Evidence-based medicine. Spine 1998;23(10):1085-1086.
  16. Sackett DL, Rosenberg WMC, Gray JAM, et al. Evidence-based medicine: what it is and what it isn't. British Medical Journal 1996;312:71-72.
  17. Di Fabio RP. What is "evidence"? Journal of Orthopaedic & Sports Physical Therapy 2000;30(2):52-55.
  18. Feinstein AR. Meta-analysis: statistical alchemy for the 21st century. Journal of Clinical Epidemiology 1995;48(1):71-79.
  19. Imrie R, Ramey DW. The evidence for evidence-based medicine. Complementary Therapies in Medicine 2000;8:123-126.
  20. Shakespeare W. Julius Caesar; Act I, Scene II, lines 140-141.

Anthony Rosner, PhD
Brookline, Massachusetts

rosnerfcer@aol.com

April 2005