AI’s Blind Spots in Healthcare
Not long ago, patients turned to “Dr. Google” to self-diagnose and chase miracle cures. Today, the trend has shifted to “Dr. AI.” While AI can rapidly summarize information and even analyze medical research, it carries significant blind spots every clinician should recognize.
Access Limitations
AI models are largely restricted to open-access journals, yet roughly 75% of medical journals remain locked behind subscription paywalls, leaving much of the literature inaccessible. As a result, AI-generated answers are often built on only a fraction of the available evidence.
Abstract Dependence
When full-text access is blocked, AI often defaults to abstracts. This is problematic because abstracts frequently misrepresent study findings. For example:
- A review of 185 publications, published in BMJ Open, found that nearly 32% of abstracts contained errors, omissions or “spin” that distorted conclusions.1
- Li, et al., reported a 39% discordance between abstracts and full reports.2
- Nascimento, et al., found that in 66 low back pain systematic reviews, 80% of abstracts contained some form of spin.3
If AI builds its recommendations on these summaries, the guidance can rest on shaky ground.
Difficulty Spotting Methodological Flaws
Even when AI has access to full-text studies, it struggles to detect deeper methodological weaknesses, such as biased designs, inappropriate endpoints or underpowered samples. Identifying these flaws requires domain expertise and critical appraisal skills that extend beyond AI’s current abilities.
The Bottom Line
AI may sound authoritative, but its blind spots can easily mislead patients who place too much trust in its answers. Just as “Dr. Google” once convinced people they had found miracle cures, “Dr. AI” can present incomplete or distorted information that fuels unnecessary anxiety, inappropriate treatments, or misplaced confidence in unproven solutions.
In chiropractic practice, this could mean patients coming in with AI-generated conclusions about back pain treatments or the safety of cervical manipulation that sound convincing, but rest on weak evidence. Unless clinicians step in to clarify, provide context and ensure decisions remain grounded in sound clinical judgment, patients risk being guided by errors rather than evidence.
Actions to Take
If you encounter a clinically important finding from AI, your first step should be to verify its foundation. Determine whether the claim is based on the full study report or only on an abstract, which may omit critical details or misrepresent results.
Always ask AI for a citation and, ideally, a direct link to the full-text article. If AI cannot provide access to the full study, it is reasonable to assume the information may be drawn from an abstract alone.
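For readers who want to automate part of this check, the sketch below shows one possible way to confirm that a citation actually resolves and whether an open-access full text exists, by querying the public Crossref and Unpaywall APIs. This is an illustrative assumption about a workflow, not a tool described in this article; the function name, email parameter and printed messages are placeholders you would adapt.

```python
import requests


def check_citation(doi: str, email: str) -> None:
    """Confirm a DOI resolves in Crossref, then check Unpaywall for an open-access copy."""
    # 1. Confirm the article actually exists and see where it was published.
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if r.status_code != 200:
        print(f"{doi}: not found in Crossref -- treat the citation as unverified.")
        return
    work = r.json()["message"]
    title = (work.get("title") or ["(no title)"])[0]
    journal = (work.get("container-title") or ["(no journal)"])[0]
    print(f"Found: {title} -- {journal}")

    # 2. Ask Unpaywall whether a legal open-access copy of the full text exists.
    #    Unpaywall asks callers to identify themselves with an email address.
    r = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                     params={"email": email}, timeout=10)
    if r.status_code == 200 and r.json().get("is_oa"):
        loc = r.json().get("best_oa_location") or {}
        print("Open-access full text:", loc.get("url_for_pdf") or loc.get("url"))
    else:
        print("No open-access copy found -- an AI summary may rest on the abstract alone.")


# Substitute the DOI the AI (or a reference list) gives you:
# check_citation("10.xxxx/your-doi-here", email="you@example.com")
```

If the DOI does not resolve, or no open-access copy turns up, that is a strong hint the AI’s answer was built on an abstract, a secondary summary or nothing verifiable at all.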
Abstracts are helpful for quick summaries, but are well-documented to contain errors, omissions or spin that distort the true findings. Relying on them without deeper review risks building conclusions on a weak foundation.
Ultimately, until you have read the full study yourself, you cannot be confident that AI’s representation of the evidence is accurate. Critical clinical decisions should never rest on an abstract or on AI’s partial interpretation without direct confirmation from the primary source.
References
- Wise A, Mannem D, Arthur W, et al. Spin within systematic review abstracts on antiplatelet therapies after acute coronary syndrome: a cross-sectional study. BMJ Open, 2022;12:e049421.
- Li G, Abbade LPF, Nwosu I, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol, 2017;17:181.
- Nascimento DP, Gonzalez GZ, Araujo AC, et al. Eight in every 10 abstracts of low back pain systematic reviews presented spin and inconsistencies with the full text: an analysis of 66 systematic reviews. J Orthop Sports Phys Ther, 2020;50:17-23.