Gilding the Gold Standard Lily: The Limits of Evidence-Based Medicine
Systematic, impartial review of information is the keystone upon which all scientific progress is made. That said, I have become more aware than ever in the past year how that process can easily run amok, looking more like the chaotic "broom" scene in Disney's "The Sorcerer's Apprentice" than true common sense. Sometimes we become so enamored with the process that we forget the actual information contained in an inquiry, let alone its original intent. It is simply a matter of missing the woods for the trees, with a couple of lumberjacks thrown in, sporting their own agendas.
In a recent report submitted to the British government on the advisability of foxhunting, for instance, it was concluded that: "Despite a lack of firm scientific evidence, the inquiry was satisfied that hunting seriously compromises the welfare of the fox." It said the same thing about hunting deer, hares and mink.1
"Duh," I say. Rest assured that at the Foundation for Chiropractic Education and Research (FCER), we would never be caught investing in an undertaking such as this, wasting resources and personnel to pursue a research project whose outcome was obvious or hopelessly trivial. There has to be a question or controversy involving whether the proposed hypothesis or its null will be supported. If there is a consensus for a better treatment or outcome, the null hypothesis becomes a straw man and the research becomes invalid. In outcome studies, this principle has been described as clinical equipoise, a situation in which the ethics of subjects being recruited have not been compromised because patients have not been put into a treatment regimen whose outcome is known to be inferior. Rather, there is an honest question of which of the treatment alternatives being compared will produce better results.2 This then becomes a criterion by which we should accept or reject proposed clinical trials.3
A look at studies published in the medical journals over the past five years offers a revealing glimpse into how frequently randomized clinical trials (the so-called "gold standard") have been misinterpreted or even corrupted. Design flaws, sometimes appearing to be intentional, are responsible for this. Consider the following:
- A meta-analysis comparing different heparin preparations for their respective abilities to prevent postoperative thrombosis may yield diametrically opposing results, depending upon which of 25 scales is used to distinguish between high- and low-quality trials.4
- A comparison of two pharmaceutical agents used to treat patients with advanced cancer reveals that the presumably less effective agent was routinely administered in an inactive form, primarily by authors with financial ties to the manufacturer of the opposing agent.5
- Duplicate "sausage" publications are often crafted by authors, creating the impression that these are each independent studies when they are highly redundant - the result being, of course, that these studies are given more weight than is warranted in systematic reviews.6-9
- The clinical trial by Cherkin et al. appearing in The New England Journal of Medicine, comparing three interventions in the management of acute low-back pain,10 suffered from poor design and inappropriate statistical procedures, creating the spurious impression that the fastidious interventions employed represent actual clinical practice.11-13
- Inappropriate sham procedures and possible Type II errors have compromised another clinical trial, which loudly proclaimed that spinal manipulation conferred no additional benefit to children with mild or moderate asthma.14,15
- Methodological scores attached to clinical trials create a misleading profile of high- and low-quality studies if they place too much emphasis upon sham procedures and blinding, especially for studies involving physical methods of intervention such as spinal manipulation.16 (A brief numerical sketch following this list illustrates how the choice of scoring instrument can tilt a pooled result.)
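To make the concern in the first and last items above concrete, here is a minimal numerical sketch in Python. The trial data and the two quality scales ("Scale A" and "Scale B") are invented for illustration and do not come from the heparin meta-analysis or any of the studies cited; the point is only that the same body of trials can yield opposite pooled conclusions once one scoring instrument rather than another decides which trials count as "high quality."

```python
# Purely illustrative sketch (not data from the studies cited above):
# five invented trials, each scored on two made-up quality instruments,
# "Scale A" and "Scale B". Depending on which scale defines "high quality",
# the pooled estimate of the "good" trials points in opposite directions.

# Each trial: (effect as log risk ratio, variance, Scale A score, Scale B score)
trials = [
    (-0.40, 0.04, 7, 3),
    (-0.35, 0.05, 6, 4),
    (-0.05, 0.03, 3, 8),
    ( 0.15, 0.06, 2, 7),
    (-0.20, 0.05, 5, 4),
]

def pooled_effect(selected):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1.0 / var for (_, var, _, _) in selected]
    weighted_sum = sum(w * eff for w, (eff, _, _, _) in zip(weights, selected))
    return weighted_sum / sum(weights)

# Call a trial "high quality" if it scores >= 5 on the chosen instrument.
high_by_a = [t for t in trials if t[2] >= 5]
high_by_b = [t for t in trials if t[3] >= 5]

print("Pooled effect, high-quality trials per Scale A:", round(pooled_effect(high_by_a), 3))
# -> about -0.32 (appears to favor the treatment)
print("Pooled effect, high-quality trials per Scale B:", round(pooled_effect(high_by_b), 3))
# -> about +0.02 (appears to show no benefit)
```

Nothing in the underlying trials has changed between the two printed results; only the arbitrary choice of scoring instrument has.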
The "take-home" lesson in all this adheres to the often-cited principle of GIGO ("garbage in, garbage out") used in relation to computers. Information extracted from an investigation is only as good as the quality of data put into the analysis; definitive results are never delivered by process alone with no forethought. In the realm of patient health care, it is easy to forget that sound, clinical observations in the doctor's office remain the cornerstone upon which all experimental approaches, including randomized clinical trials, are based. In the world of clinical treatment, erroneous judgments are as much the product of improper generalizations of RCTs (which by their definition take place within a highly restricted setting) as the quality of the RCTs themselves. Indeed, the entire structure of evidence-based medicine is put into perspective by the renowned epidemiologist David Sackett, who argues: "External clinical evidence can inform, but never replace individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all [italics mine], and, if so, how it should be integrated into a clinical decision."17
The importance of the observations attending the individual patient has been eloquently carried a step further by Mark Tonelli, who argues that there will always be a region (an epistemological zone) in which the differences between individual patients cannot be made explicit and quantified.18 To assume otherwise, to assume categorically that the entire answer to clinical treatment by any modality has been captured by the precision of analytic methods in the scientific literature, would be like saying that a medical librarian with access to systematic reviews, meta-analyses, Medline, and practice guidelines provides the same quality of health care as an experienced physician.19
This is precisely where FCER recognizes the challenge of future research. Our program will always endeavor to involve the individual clinician and to elevate the case study to its proper level of recognition. It is sometimes overlooked that all clinical trials were originally conceived from observations of individual cases. We ask that you provide every bit of input, through both advice and continuing membership, to FCER so that we may overcome an all-too-prevalent myth and ultimately provide the missing pieces to what is commonly regarded as evidence-based medicine.
References
- Burns Report, The Daily Telegraph, June 13, 2000, p. 1.
- Freedman B. Equipoise and the ethics of clinical research. New England Journal of Medicine 1987; 317:141-145.
- Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? Journal of the American Medical Association 2000; 283(20): 2701-2711.
- Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. Journal of the American Medical Association 1999; 282(11): 1054-1060.
- Johansen HK, Gotzsche PC. Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. Journal of the American Medical Association 1999; 282(18): 1752-1759.
- Rennie D. Fair conduct and fair reporting of clinical trials. Journal of the American Medical Association 1999; 282(18): 1766-1768.
- Gotzsche PC. Multiple publication of reports of drug trials. European Journal of Clinical Pharmacology 1989; 36: 429-432.
- Huston P, Moher D. Redundancy, disaggregation, and the integrity of medical research. Lancet 1996; 347:1024-1026.
- Tramer MR, Reynolds DJM, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: case study. British Medical Journal 1997; 315: 635-640.
- Cherkin DC, Deyo RA, Battie M, Street J, Barlow W. A comparison of physical therapy, chiropractic manipulation, and provision of an educational booklet for the treatment of patients with low back pain. New England Journal of Medicine 1998; 339: 1021-1029.
- Rosner AL. Response to the Cherkin article in The New England Journal of Medicine. Dynamic Chiropractic November 2, 1998; 16(23).
- Rosner AL. Rebuttal to Cherkin comments in Dynamic Chiropractic. Dynamic Chiropractic January 1, 1999; 1(1).
- Chapman-Smith D. Back pain, science, politics and money. The Chiropractic Report November 1998;12(6).
- Balon J, Aker PD, Crowther ER, Danielson C, Cox PG, O'Shaughnessy D, Walker C, Goldsmith CH, Duku E, Sears MR. A comparison of active and simulated chiropractic manipulation as adjunctive treatment for childhood asthma. New England Journal of Medicine 1998; 339: 1013-1020.
- Vernon H. The Semantics and Syntax of Research: Critique of Impure Reason. Research Agenda Conference IV, Arlington Heights, IL, July 24, 1999.
- Koes BW, Bouter LM, van der Heijden GJMG. Methodological quality of randomized clinical trials on treatment efficacy in low back pain. Spine 1995; 20(2): 228-235.
- Sackett DL. Editorial: evidence-based medicine. Spine 1998; 23(10): 1085-1086.
- Tonelli MR. The philosophical limits of evidence-based medicine. Academic Medicine 1998; 73(12): 1234-1240.
- Horwitz RI. The dark side of evidence-based medicine. Cleveland Clinic Journal of Medicine 1996; 63: 320-323.
Anthony Rosner, PhD
Brookline, Massachusetts
rosnerfcer@aol.com