
The X Factor in Clinical Research: The Patient

Anthony Rosner, PhD, LLD [Hon.], LLC

It was the great baseball legend, former New York Yankees catcher Yogi Berra – he of countless aphorisms, each with a mind-bending twist – who once declared, "You can observe a lot by watching." And so it goes when considering what happens with patients in clinical research. For while the world seems tidy and reasonably well-controlled when one considers the elegant – and expensive – design of randomized, controlled trials, it is easy to overlook the degree to which a patient is invested in the treatment he/she is receiving for a particular condition.

The first signs that meaningful clinical results could be deduced from patient observation came in two studies published in 2000. The first, by Concato, et al., tracked five clinical topics across 99 original articles in five major medical journals from 1991-1995, identifying meta-analyses of randomized, controlled trials and meta-analyses of either cohort or case-control studies that assessed the same intervention. The clinical topics investigated were Bacille Calmette-Guerin (BCG) vaccine and tuberculosis, mammography and mortality from breast cancer, cholesterol levels and death due to trauma, treatment of hypertension and stroke, and treatment of hypertension and coronary heart disease.

It turned out that the average results of the observational studies were "remarkably similar" to those gleaned from the randomized, controlled trials. In fact, less heterogeneity of the results was actually seen in the observational studies compared to the RCTs.1

The second study, by Benson, et al., appeared in the very same journal issue, immediately preceding the Concato, et al., paper. Here, the authors searched the Abridged Index Medicus and Cochrane databases from 1985-1998 for studies comparing two or more treatments for the same condition. For each intervention, the magnitudes of effect in the various observational studies were combined by a weighted analysis-of-variance procedure and then compared with the combined magnitudes in the randomized, controlled trials that evaluated the same treatment. A total of 136 reports covering 19 diverse treatments were retrieved.

In most cases, the estimates of treatment effects from observational studies and randomized, controlled trials were similar. In only two of the 19 analyses did the combined magnitude from the observational studies lie outside the 95 percent confidence interval for the combined magnitude of the randomized, controlled trials. The conclusion was that there was little evidence that estimates of combined treatment effects from observational studies reported after 1984 were either consistently larger or qualitatively different from those obtained in randomized, controlled trials.2
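
To picture what such a comparison involves, consider a minimal sketch in Python. It is only an illustration, not the procedure Benson, et al., actually used (they applied a weighted analysis-of-variance technique): it pools hypothetical study estimates by simple inverse-variance weighting and then asks whether a pooled observational estimate falls inside the 95 percent confidence interval of the pooled randomized, controlled trial estimate. All numbers below are invented.

    import math

    def pool_fixed_effect(estimates, std_errors):
        """Combine per-study effect estimates by inverse-variance (fixed-effect)
        weighting; return the pooled estimate and its 95 percent confidence interval."""
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

    # Hypothetical log risk ratios and standard errors from several RCTs of one treatment
    rct_pooled, (ci_low, ci_high) = pool_fixed_effect(
        estimates=[-0.22, -0.35, -0.10, -0.28],
        std_errors=[0.11, 0.09, 0.14, 0.12])

    # Hypothetical pooled estimate from cohort / case-control studies of the same treatment
    observational_pooled = -0.24

    print(f"Pooled RCT estimate {rct_pooled:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
    print("Observational estimate inside the RCT interval:",
          ci_low <= observational_pooled <= ci_high)

Fixed-effect weighting is chosen here purely for brevity; a real meta-analysis would also examine heterogeneity between studies before settling on a pooling model.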

Jump ahead 14 years and we find this issue coming home to roost right in chiropractors' wheelhouse: addressing low back pain. This time around, 70 randomized, controlled trials and 19 cohort studies (retrieved from respective totals of 1,134 and 653) were used to pool within-group changes in pain among individuals over the age of 18 experiencing low back pain. These patients were found to undergo rapid improvement during the first six weeks, followed by smaller gains for the remainder of one year.

The finding was that there was no statistically significant difference in the standardized mean change between randomized, controlled trials and cohort studies at any time point. Consistent with the previous reports,1-2 the conclusion this time around was that the clinical course of low back pain symptoms followed matching patterns in randomized, controlled trials and cohort studies. Indeed, the authors concluded that the improvement seen in low back pain patients enrolled in clinical studies is likely to reflect the nonspecific effects of seeking and receiving care, independent of study design.3
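
The "standardized mean change" in that comparison is simply the within-group drop in pain from baseline, expressed in standard-deviation units so that studies using different pain scales can be pooled. The sketch below uses one common convention (dividing by the baseline standard deviation; the meta-analysis describes its own standardization), and the numbers are invented to mirror the pattern described above.

    def standardized_mean_change(baseline_mean, followup_mean, baseline_sd):
        """Within-group effect size: improvement from baseline in baseline-SD units."""
        return (baseline_mean - followup_mean) / baseline_sd

    # Invented example: mean pain on a 0-100 scale in one study arm
    baseline_mean, baseline_sd = 58.0, 20.0
    followups = {"6 weeks": 32.0, "26 weeks": 27.0, "52 weeks": 25.0}

    for timepoint, mean_pain in followups.items():
        smc = standardized_mean_change(baseline_mean, mean_pain, baseline_sd)
        print(f"{timepoint}: standardized mean change = {smc:.2f}")

Run on these invented figures, the output shows a large change by six weeks and only modest further gains over the rest of the year, which is the clinical course reported in both study designs.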

These findings compel us to take a second look at two issues. The first is the vetting of cohort studies, which, while never capable of replacing randomized, controlled trials, appear to have gained a modicum of respectability from the foregoing studies. Baldwin, et al., offered a set of guidelines to ensure that cohort studies pass muster:4

  1. Is the research question clearly focused, with the aim preferably stated as a working hypothesis?
  2. Do the general demographic and other descriptors allow the results to be generalized to other populations?
  3. Is a clear operational definition of the outcomes provided?
  4. Within reason, are all subjects at a similar point in the course of a particular condition followed for the same length of time?
  5. Do the instruments used measure what they claim to measure, and are the results reproducible?
  6. Are there adjustments for potential important confounders?
  7. Were all statistical methods used appropriate and properly applied?
  8. Did exposure / prognostic factors clearly occur, and were they measured at a time before the outcome?
  9. Did the study have enough statistical power? (See the sketch following this list.)
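
On that last point, a minimal Python sketch, using a standard normal approximation for comparing two group means, gives a feel for how quickly sample-size requirements grow as the expected effect shrinks. The effect sizes, alpha and power values below are conventional illustrative benchmarks, not figures taken from any of the studies discussed here.

    import math
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate subjects needed per group to detect a standardized
        difference in means (Cohen's d); two-sided test, normal approximation."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    for d in (0.8, 0.5, 0.2):  # conventionally "large," "medium" and "small" effects
        print(f"Cohen's d = {d}: about {n_per_group(d)} subjects per group")

A cohort study that cannot plausibly detect the effect it sets out to find fails question 9 no matter how well it satisfies the other eight.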

The second issue is an amalgam of patient expectations and preference. In this case, it is a matter of simply considering the patient's own investment in the intervention undertaken. We begin with the fact that in physical medicine, patient blinding in a randomized, controlled trial is virtually impossible. That being the case, if a patient in such a study is involuntarily assigned to a protocol he/she doesn't care for, that individual is likely to fare more poorly in the outcomes – whether by complying halfheartedly, not complying at all or (worse still) dropping out.

This suspicion is given credibility by the unique companion studies of disc herniation patients undertaken by Weinstein, et al., some years ago in the Spine Patient Outcomes Research Trial (SPORT). When patients were simply randomized without their input, those who underwent surgery and those who received any of a number of nonoperative procedures (education and counseling, NSAID administration, injections, physical therapy or chiropractic) displayed nearly identical outcomes on no fewer than eight outcome measures (bodily pain, physical function, Oswestry Disability Index, sciatica bothersomeness, work status, satisfaction with symptoms, satisfaction with care, and self-rated improvement) over a two-year period.5 However, when patients chose their method of treatment in the same experimental setting, surgical patients significantly outperformed their nonoperative counterparts on every one of the eight dimensions!6

The Weinstein, et al., results seem almost counterintuitive, given the wealth of literature describing the limited effectiveness of spinal surgery and its attendant complications. Yet one could speculate that the very intensity of a surgical intervention might have spiked a patient's adrenaline to astronomical levels, or at least compelled such patients to comply closely with the physician's recommendations. Under those circumstances, an improved outcome might be anticipated.

All of which is to say that the role of the patient, often underestimated in traditional randomized, controlled trials or meta-analyses, is a major factor to be reckoned with. Or as the character Fenwick (Kevin Bacon) in Barry Levinson's movie "Diner" so thoughtfully mused after his pal Boogie (Mickey Rourke) was rebuffed by a winsome young lady during an unsuccessful pick-up attempt: "Do you ever wonder that there's something going on here that we don't know about?"

References

  1. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. NEJM, 2000;342(25):1887-1892.
  2. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. NEJM, 2000;342(25):1878-1886.
  3. Artus M, van der Windt D, Jordan KP, Croft PR. The clinical course of low back pain: a meta-analysis comparing outcomes in randomised clinical trials (RCTs) and observational studies. BMC Musculoskelet Disord, 2014;15:68.
  4. Baldwin ML, Cote P, Frank JW, Johnson WG. Cost-effectiveness studies of medical and chiropractic care for occupational low back pain: a critical review of the literature. Spine J, 2001;1:138-147.
  5. Weinstein JN, Tosteson TD, Lurie JD, et al. Surgical vs. nonoperative treatment for lumbar disk herniation: the Spine Patient Outcomes Research Trial (SPORT): a randomized trial. JAMA, 2006;296(20):2441-2450.
  6. Weinstein JN, Lurie JD, Tosteson TD, et al. Surgical vs. nonoperative treatment for lumbar disk herniation: the Spine Patient Outcomes Research Trial (SPORT) observational cohort. JAMA, 2006;296(20):2451-2459.
November 2014