Introduction
Eight hundred million people now use ChatGPT each week. Google AI Overviews appear above traditional search results for hundreds of millions of queries daily. Perplexity, Claude, Gemini, Copilot, Grok, and DeepSeek are reshaping how investors, patients, regulators, and the public find information about organizations.
These platforms do not always get it right, and when they get it wrong the consequences are not theoretical. Organizations face defamation claims, regulatory scrutiny, lost revenue, and reputational damage that traditional search engine management cannot address.
This publication collects verified, sourced cases of AI platforms generating falsehoods about specific organizations, products, and individuals. Every source cited is a reputable publication: a major news outlet, legal journal, government agency, academic institution, or established trade press. The cases are organized by sector and include the platform involved, what the AI stated, what was actually true, and the documented impact.
The pattern is consistent. The scale is growing. The legal and regulatory landscape is responding.
Organizations Harmed by AI-Generated Falsehoods
Wolf River Electric v. Google
- Platform: Google AI Overviews
- What AI Stated: Google AI Overviews told users that Wolf River Electric, a Minnesota solar contractor, was being sued by the state Attorney General for deceptive sales practices, hidden fees, and high-pressure tactics.
- What Was True: No such lawsuit existed. The AI fabricated the entire claim, misattributing information from unrelated cases.
- Impact: Wolf River Electric reported lost contracts and terminated nonprofit partnerships as a direct result of the false AI-generated claims.
- Legal Status: Defamation lawsuit filed March 2025, seeking $110–210 million in damages. The case returned to Minnesota state court in January 2026 after Google missed the statutory removal deadline. Active litigation.
Source: Reason / Volokh Conspiracy, June 2025
Source: Futurism, June 2025
Walters v. OpenAI (ChatGPT Defamation)
- Platform: ChatGPT
- What AI Stated: ChatGPT generated a fabricated legal complaint summary claiming a Georgia radio host had been accused of embezzling money from a nonprofit organization he had no connection to.
- What Was True: A journalist asked ChatGPT to summarize a real lawsuit. ChatGPT fabricated the defendant’s identity, inserting the wrong person’s name into the summary.
- Legal Status: Georgia Superior Court dismissed the case in May 2025, granting summary judgment to OpenAI. The dismissal turned on case-specific factors: the plaintiff was a public figure who suffered no documented damages, and OpenAI’s explicit disclaimers warned users of potential inaccuracies. For private organizations with demonstrable losses, the legal landscape remains unsettled.
Source: Loeb & Loeb LLP, May 2025
Source: Gibson Dunn, May 2025
Major Publishers v. Perplexity AI (Fabricated Attribution)
- Platform: Perplexity AI
- What AI Stated: Perplexity fabricated content and falsely attributed it to The New York Times, Wall Street Journal, New York Post, and Chicago Tribune.
- What Was True: The attributed content was fabricated; the publications never wrote it. Multiple lawsuits allege Perplexity illegally ingested paywalled content and used it to generate fabricated summaries presented as the publishers’ own reporting.
Source: Deadline, October 2024
Source: TechCrunch, December 2025
Air Canada: AI Chatbot Fabricated Organization Policy
- Platform: Air Canada website chatbot
- What AI Stated: The chatbot told a customer that Air Canada offered bereavement fares and that a retroactive discount could be applied within 90 days of travel.
- What Was True: Air Canada’s actual policy explicitly states it will not provide refunds for bereavement travel after the flight is booked. The policy the chatbot described did not exist.
- Precedent: The British Columbia Civil Resolution Tribunal ruled Air Canada liable, rejecting the defense that the chatbot was a “separate legal entity.” Organizations bear full liability for AI outputs on their platforms.
Source: CBC News, February 2024
AI Falsehoods in Healthcare
Healthcare carries unique risk. AI platforms deliver incorrect drug information, fabricate clinical research, and misrepresent safety data. The downstream consequences extend beyond reputation. ECRI Institute, the independent patient safety organization, named AI chatbot misuse the number-one health technology hazard for 2026.
Source: ECRI Institute, 2026 Top 10 Health Technology Hazards
Google AI Overviews: Documented Health Falsehoods
- Platform: Google AI Overviews
- Documented Failures: Advised pancreatic cancer patients to avoid high-fat foods (clinicians recommend the opposite). Generated misleading explanations of liver blood test results. Provided false information about cancer screening.
Source: The Guardian, January 2026
FDA Internal AI Tool Generated Nonexistent Studies
- Platform: ELSA (FDA internal AI tool)
- What Happened: In regulatory summaries and drug approval reviews, the FDA’s own internal AI chatbot cited studies that do not exist.
- Significance: AI-generated falsehoods contaminated the regulatory review process itself. An FDA official noted the system produces confident citations to nonexistent research.
Source: CNN Politics, July 2025
ChatGPT Fabricated Medical References at Scale
- Platform: ChatGPT
- What AI Stated: Asked medical questions across 20 clinical domains, ChatGPT generated citations that appeared authentic, using real author names, plausible journal titles, and coherent volume and page numbers. The references looked indistinguishable from legitimate medical literature.
- What Was True: Sixty-nine percent of the references were entirely fabricated. An earlier study found similarly catastrophic rates: 47% of medical references entirely fabricated, 46% inaccurate, and only 7% accurate. A minimal verification approach is sketched after the sources below.
Source: PMC / NIH, 2024
Source: PMC / NIH, 2023
AI Falsehoods in the Legal System
The legal profession’s encounter with AI-generated falsehoods provides a useful analog for regulated industries. Professionals rely on AI output without verification. The consequences are public, documented, and increasingly punished.
MyPillow Attorneys Sanctioned for AI-Fabricated Citations
- Platform: AI-generated legal research (specific tool undisclosed)
- What Happened: Two attorneys submitted a federal court filing containing approximately 30 fabricated case citations in a Mike Lindell defamation case.
- Outcome: The U.S. District Court in Denver fined each attorney $3,000 in July 2025.
Source: NPR, July 2025
Mata v. Avianca
- Platform: ChatGPT
- What Happened: An attorney used ChatGPT to prepare a legal brief and submitted it to federal court. The brief cited entirely fabricated court cases, including fake judge initials and citations. The judge ordered the attorney to show cause.
- Outcome: The attorney was fined $5,000 by the U.S. District Court for the Southern District of New York in June 2023.
Source: CNBC, June 2023
Source: Wikipedia: Mata v. Avianca, Inc.
Regulatory Enforcement
Federal regulators are moving. The pattern across agencies is consistent: AI-related false statements are being treated as enforcement priorities, not future risks.
FTC: Operation AI Comply
- Agency: Federal Trade Commission
- Actions: The FTC launched its “Operation AI Comply” enforcement campaign targeting deceptive AI claims, with actions against DoNotPay (false “AI Lawyer” claims), FBA Machine ($15.9 million consumer fraud via AI storefronts), and Rytr (AI-generated fake reviews).
SEC: Enforcement Actions for AI-Related False Statements
- Agency: Securities and Exchange Commission
- Actions: The SEC charged investment advisers Delphia ($225,000 fine) and Global Predictions ($175,000 fine) with making false and misleading statements about their use of AI in securities disclosures.
- Significance: The SEC incorporated “AI-washing” into its 2024 examination priorities. CEOs and chief compliance officers face personal liability for failing to supervise AI-generated statements.
Source: SEC Press Release 2024-36, March 2024
Source: Mayer Brown, April 2024
AI citation volatility compounds the problem. Only 9.2% of URLs cited in Google’s AI Mode remain consistent across three searches of the same query on the same day.
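To make that volatility figure concrete, here is a small sketch of one plausible way such a number could be computed, assuming the cited URLs from three same-day runs of an identical query have already been captured (there is no public API for AI Mode citations, so the capture step and the URL sets below are hypothetical):

```python
"""Sketch: one plausible consistency metric for repeated AI-search citations."""

def citation_consistency(runs: list[set[str]]) -> float:
    """Share of all unique cited URLs that appear in every run."""
    union = set().union(*runs)
    stable = set.intersection(*runs)
    return len(stable) / len(union) if union else 1.0

# Hypothetical citation sets from three same-day runs of one query.
runs = [
    {"https://example.com/a", "https://example.com/b", "https://example.com/c"},
    {"https://example.com/a", "https://example.com/d"},
    {"https://example.com/a", "https://example.com/e", "https://example.com/f"},
]

print(f"Stable citations: {citation_consistency(runs):.1%}")  # 1 of 6 -> 16.7%
```

Under this reading of the metric, a 9.2% figure means that fewer than one in ten of the sources an organization might be cited from (or misrepresented by) will even appear consistently from one search to the next, which makes monitoring and correction a moving target.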
The information layer between organizations and the public is being rewritten by systems that do not distinguish between fact and fabrication. The cases documented here are not anomalies. They are the visible portion of a structural problem.
Contributors
Andrew David Linde
Founder and Principal
Craton Meridian™ | AI Integrity™