Improving the Case
"When the case is weak, pound the table."
-- Abraham Lincoln
As a professional scientist, when I was first exposed to chiropractic material I experienced a vague feeling of incompleteness. Finally it dawned on me what was missing: evidence that did not support the specific chiropractic hypothesis under discussion was generally not part of the presentation.
In contrast, scientific lectures I had attended (and scientific articles I had read) typically spent as much time describing the evidence against the hypothesis as was spent describing supportive evidence. Occasionally, the bulk of a presentation was devoted to contraindications and anomalous cases, to situations where the reported effect did not work. Most of us are aware that in scientific presentations even extremely strong effects are usually described with mild terms such as "encouraging" or "suggestive." Speculation by authors about the meaning of findings, such as might appear in the "conclusion" or "discussion" sections of papers, is typically brief or nonexistent, even for strong effects.
This sort of conservative understatement, which is generally characteristic of scientific writing, is not done for the sake of style. It is done because the investigators really believe it.
Allow me to explain. When a student is introduced to basic scientific methods, the world seems a straightforward, predictable, controllable place. You simply work on the experimental design until you get it right, and then you execute the design and do the obvious, appropriate analysis. It seems very straightforward, and for those who do not become involved in actually doing research, that naive opinion is left largely unchanged. People imagine that research is always definite, that data are always unambiguous, that arbitrary decisions are not involved, that things get "proved" conclusively, and that new knowledge is automatically created. Scientific findings and research directions are imagined to have been just that clear-cut from the beginning of an investigation, as portrayed in Hollywood movies. People mistakenly believe that the scientific process generates certainty, and moreover, that it does so in a mindless, mechanical, and robotic way.
Nothing could be further from the truth. Scientists, like other explorers, could well be described as people who spend most of their time totally lost. Moreover, experienced professionals have had everything go wrong so many times, and had experiments come out one way one time and totally differently another time despite their best efforts, that they have become conservative in self-defense. Consistency aside, scientists know that even if an effect appears to be real at the moment, it is only a matter of time before all the data involved will be reanalyzed within the context of a better hypothesis, a more predictive model, and a more comprehensive theory.
This justifiably cautious attitude has become so characteristic of good science that reckless, simplistic, overenthusiastic, or one-sided presentations are likely to set off all sorts of alarms in the experienced listener. To paraphrase Lincoln, if the table is being pounded, people are likely to think the case is weak.
If the objective of scientific interpretations (e.g., model making and testing) were to determine the ultimate "truth" of things, there might be some sort of justification for one-sided presentations, i.e., for omitting interpretations which are not "true." But the point of presenting findings which either support or don't support hypothetical models has nothing to do with truth, per se. Scientific models are not shown to be "true" or "false" by supportive evidence; they are shown to be more or less useful for specific kinds of predictions. Models are only temporarily useful analogies, after all, which is yet another reason why scientists generally bend over backwards to present alternative interpretations of findings. Those circumstances where effects don't work are the very stuff from which better, more comprehensive models are made.
By far the most important shortcoming of one-sided presentations is simply that they are impoverished science. They contain far less information, they prohibit the synthesis and integration of discrepant information necessary for the development of more predictive models, and because they contain biased information, they may actually misdirect or reduce a knowledge base rather than improve it. In addition, from a psychological or political point of view, one-sided presentations are transparently defensive, not only to scientists but to most of our professional colleagues.
If the chiropractic "case" is to be fairly received, it must be fairly presented. And if chiropractic models are to become more comprehensive and more predictive in the future, we must demand that presentations routinely include fair statements of opposing viewpoints, and most especially, anomalous, negative or contradictory evidence, whenever available.
Robert Jansen, Ph.D.
Executive Director, Consortium for Chiropractic Research
Sunnyvale, California
Individual memberships in CCR are $65 per year and include a subscription to the Consortium newsletter, which keeps doctors updated on Consortium activities and projects, e.g., standards of care. The membership dues may be charged to your Mastercard or Visa by calling 1-800-327-2289, or you may send a check to: The (Pacific) Consortium for Chiropractic Research, 1095 Dunford Way, Sunnyvale, California 94087.