When sports chiropractors first appeared at the Olympic Games in the 1980s, it was alongside individual athletes who had experienced the benefits of chiropractic care in their training and recovery at home. Fast forward to Paris 2024, where chiropractic care was available in the polyclinic for all athletes, and the attitude has now evolved to recognize that "every athlete deserves access to sports chiropractic."
To Prove or To Improve? That Is the Question
At its simplest, the purpose of science is to learn stuff we don't already know. Sheer, unbridled curiosity is a great motivator for many investigators, but plenty of other things drive people to seek new knowledge. For much of chiropractic's existence, we have found ourselves on the defensive in realms such as licensure, medical ostracism, reimbursement and politics - both internal and external. Understandably, this has fostered a generational culture of victimization at the hands of so-called science, and with it some distrust. Because science was so frequently harnessed by our opponents as a tool to constrain us, many chiropractors have come to see it as tainted by agenda.
Even those of us who recognize its value have often filtered it through a defensive mindset. Why do we need research? To prove chiropractic works, of course! Much of the early energy in our profession's fledgling research enterprise has been directed at finding studies to "prove" our methods are better than conventional ones, in order to persuade policy-makers, regulators, litigators and politicians that we deserve various professional practice rights. While such a "defensive" use of science has some pragmatic currency, culturally and socially it represents a relatively minuscule aspect of science, and a suspect one. It is human nature for professional and business groups to latch onto and promote only studies that support their interests. In the current era of evidence-based medicine and technology assessment, policy-makers and academicians see right through special-interest spin masquerading as science, whether it comes from drug companies, surgical societies or chiropractic associations.
Embracing the real science of chiropractic means doing it for the knowledge accrued. As Scott Haldeman has so often pointed out, the doers of a clinical method are merely the tradesmen; it's the knowers who are the owners and keepers of the profession, as well as the framers of its cultural authority. In practical terms, this means we are at a point in our scientific and professional evolution in which the methods of science become important tools for objectively assessing what we do and applying what we learn to do it better. In other words, it's all about the R&D (research and development) to do a better job with our patients.
The contention that all chiropractic techniques are created equal is a great example. When I was in chiropractic school (I think that was somewhere around the time D.D. Palmer discharged his first patient, Harvey Lillard), we were taught a variety of chiropractic technique systems, with all kinds of variability in how to palpate or assess spinal function (and even a few other, more esoteric kinds of function). The justification for each technique rested on what its originator had reasoned, frequently tested against nothing more than their own personal practice experience.
Today, our colleges and many technique proponents are actively engaged in meaningful scientific inquiry, and the volume of worthy chiropractic literature has grown substantially in recent decades. However, we have only recently begun to see studies that compare interventions, such as manipulation alone versus manipulation with exercise, or passive mobilization versus high-velocity manipulation. Getting under the hood of how variations in chiropractic and other manual approaches affect a patient's outcome is still in its infancy. Further, the development of a professional culture and infrastructure that spreads this kind of knowledge into general practice has not even entered the conception stage.
The majority of continuing education seminars in chiropractic (and frankly, in many other CAM and conventional health fields) still follow the "expert" presenter model, by which a proponent prattles on about their own approach to patient care. Everybody looks on in awe when an attendee with a lingering shoulder problem or spasmed multifidus muscle proclaims, "Gee, it feels looser," after the presenter demonstrates the intervention in the seminar. Even the more systematic and investigative-minded folks tend to present data on theory development (reliability of diagnostic tests, changes in physical findings or physiological measurements such as range of motion or radiographic markings) instead of real patient outcomes compared to the various alternatives a real patient would have.
Our scientific evolution now requires information on meaningful measures of effectiveness in real patients with real problems, and enough of them to make it persuasive. While a well-done case report can be a valuable starting point, it's almost never a scientific breakthrough. Reproducibility of effect across a broad spectrum of patients in real-world settings is what matters. And when information emerges that shows meaningful benefit (say, the ability to return to regular work activity, rather than a few degrees of improvement on a provocative exam test), all of us need to start looking at how we might adapt our care approaches to harness such improvements.
Improving doesn't mean we have to start over, but it does mean we need to keep our eye on the ball of overall sustainable, functional improvement of the patient. At the individual practitioner level, it means a few things. Track more than clinical findings in the objective portion of your SOAP notes; a quick and easy option is to use an anchored scale (such as a visual analog scale) to document things like how far a patient can go before pain stops them (distance, time, etc.). Another thing individuals can do is ask the hard questions of presenters: "What proportion of patients have been documented to benefit?" "What inclusion and exclusion criteria applied to the patients it was tried on?" "How were the patients assessed (e.g., by the presenter or developer, or by an independent observer)?" Technique developers and teachers can also adopt postures of explicit assessment, aimed at improving patients' self-reliance and function.
We've made some incredible scientific strides in the past three decades, primarily in awareness of the importance of engaging in and supporting scientific discovery. What we need to do now is increase our capacity to do meaningful science, with more research-savvy folks throughout all walks of the profession (practitioners, political leaders and instructors, as well as more researchers per se). Further, we need to treat scientific inquiry, and the findings from it, as essential tools for refining what we do into true "best practices" that improve how we intervene with our patients, compared to the real choices the patient has to make. As this accrues to us, so do our position, status and cultural authority within health care. Oh yeah, and our patients will get better faster and at lower cost and/or higher value, making us the provider everyone wants to use.