How Should Chiropractors Use Scientific Evidence?
Evidence-based medicine is all the rage these days, and for good reason. When you reflect on your time in practice, no doubt you can recall a good number of patients who had taken ineffective medication, experienced side effects worse than the condition itself, or had to cope with a failed surgical outcome. To the extent that scientific research can place better information in the hands of doctors and patients about when an intervention is likely to be of value, evidence-based practice will make a great contribution. However, science does not and cannot provide all the answers to all clinical situations.
Playwright Bertolt Brecht, in The Life of Galileo, intoned, "The aim of science is not to open the door to everlasting wisdom, but to set a limit to everlasting error." The goals of evidence-based practice include reducing the number and severity of medical errors - which in turn reduces the time and money spent on useless or less-than-effective treatment - and improving clinical outcomes. Unfortunately, the information science provides is nearly always incremental, and often ambiguous - even contradictory. This makes using scientific evidence as a resource for improving clinical practice a challenge. People and systems typically need to make definitive, dichotomous decisions (e.g., give the adjustment, prescribe the medication, authorize the reimbursement - or not) and need to justify the decisions they make.
All scientific studies have limitations (the better ones always discuss these in their reports), and these limitations, along with the context of the reported findings, are essential in determining how to use such information for making individual clinical decisions. Moreover, no one has the time or expertise to sort through the thousands of pages of reports to determine what is relevant and what is not. Instead, we rely on user-friendly syntheses of reports in such forms as literature reviews and practice guidelines. Frequently, these resources possess not just the limitations of the studies they were based on, but additional limitations of their own (e.g., emphasis on studies that favor a preconceived bias, or flawed reviews of published studies).
It is critical that clinicians, patients and policy-makers understand the context in which "evidence" was gathered and reported in the first place, and the context in which it was appraised and synthesized. Better journals are increasingly requiring that abstracts of articles be "structured" - organized into sections that clearly specify purpose, methods, findings and conclusions. Literature reviews and quick-reference cards for practice guidelines often neglect to include detailed information on methodology and limitations regarding the processes by which they were developed, although this practice also is changing.
As doctors, patients and policy-makers mature in their use of evidence, the shape and form of what is being presented are improving. Organizing information about evidence in readily revisable, accessible and useable formats is key. One of my favorite formats is that used by the National Cancer Institute's Web site on treatment summaries (http://imsdd.meb.uni-bonn.de/cancernet/pdqphy.menu.html). The site provides doctor-oriented technical versions that include detailed background information, and companion patient versions devoid of highly technical jargon. The information is concise, condition- and situation-specific and updated regularly.
Attributes of information about evidence synthesized in a useful fashion include the following:
- summaries of studies organized by the patient situation under which the research was obtained (e.g., condition, gender, population, ages, exposures, risk factors);
- quantity and quality of studies available;
- discussion of reasons studies may be in conflict; and
- limitations in applicability to given clinical scenarios.
Doctors and patients are not the only ones evolving in their use of scientific information. Policy-makers, regulators and payers also are using more systematic approaches to evaluate the quality of scientific information, as a means of deciding how to spend resources and what risks to permit their members and policy-holders to take. Health care purchasers are using technology assessments to help make coverage decisions. New and expensive diagnostic tests, devices and procedures are regularly subjected to systematic reviews. These reviews consider the magnitude of any outcome benefit for the patient compared to alternatives; the kinds of patients an intervention was tested on compared to the type of patient it is being used on ("off-label" use); and the strength and limitations of the studies available. For example, is it worth paying $100 more a month for an antihistamine medication that offers only the marginal added convenience of needing to be taken once a day instead of twice?
The goals and interests are the same for everyone: reducing medical errors; making consistent decisions in the absence of evidence (as opposed to when evidence against effectiveness is clear); assessing the applicability of the available evidence to the patients in question; evaluating the role of patient or doctor preference in treatment decisions; assuring the quality of the care received; and weighing the risks and costs involved (individually and collectively).
So, how does chiropractic stack up in the evolving era of evidence-based information and decision-making? The fact that more and more quality studies exist about some of our methods (spinal manipulation compared to and in combination with other treatments) is influencing the development and composition of general guidelines. However, the existing data about spinal and extremity manipulation remain deficient in terms of characterizing patient populations, different types of interventions, and variation in applications (such as different frequencies and durations of care).
In addition, most of our chiropractic guideline efforts to date are limited in their information, timeliness, design or process. General practice parameters published over the past decade offer little substance or specificity for given clinical situations, and tend to be extremely practitioner- and profession-centered. As a result, these parameters are often ignored or misused by those needing to make the "bottom-line," dichotomous decisions mentioned earlier. The future of "evidence-based chiropractic" has yet to unfold, but a reasonable amount of evidence exists for some of our methods, as do more systematic processes for identifying and coping with limitations in available information.
Accurate, informative, unbiased, clinically and situation-specific, and user-friendly syntheses for chiropractic practice and clinical decision-making are still needed. But, just like scientific information in general, the aim of such syntheses of evidence should not be to open the door to some kind of "everlasting wisdom," but to reduce "everlasting error" and enhance the effectiveness and efficiency of the care we provide.
Robert Mootz, DC
Olympia, Washington
thinkzinc@msn.com