State Farm hit with lawsuit over AI bias, discrimination and unpaid insurance claims

State Farm Insurance Co. is under fire in federal court over claims it denied or canceled policies using what the plaintiffs call “cheat-and-defeat AI algorithms.”

The lawsuit, filed in the U.S. District Court for the Middle District of Alabama, also accuses the company of discriminating against policyholders based on disability, prior abuse, credit scores, and criminal history.

According to the complaint, the algorithms disproportionately harmed Black and nonwhite policyholders.

The filing goes further, alleging State Farm flagged, targeted, and tagged these customers’ claims for extra scrutiny while white policyholders faced lighter review.

That, the plaintiffs argue, violates the Fair Housing Act and results in payment delays and stalled home repairs.

One example cited: State Farm has not paid $372,437 tied to losses from lightning strikes and water damage.

The suit also claims the company engaged in elder financial exploitation, pushing unnecessary products on older customers through deceptive sales tactics.

Plaintiffs allege a longer pattern too — accusing State Farm of “doctoring” engineering reports and greenlighting fraudulent assessments of wind and water damage.

State Farm responded cautiously. The company declined to address specific allegations but said it remains committed to treating customers “with respect and dignity.”

A spokesperson added: “We take pride in our customer service and are committed to paying what we owe, promptly, courteously, and efficiently. Each claim is unique and handled based on its own merits and the facts of the loss.”

“Cheat-and-defeat” is a legal framing: cheat customers out of valid benefits, and defeat oversight and accountability.

  • The AI or algorithm screens claims or policy applications with rules that push outcomes toward denials, cancellations, or delays.
  • It may use indirect data (like credit scores, zip codes, criminal history, or even medical or abuse history) as proxies for race, disability, or socioeconomic status. That’s where the discrimination allegations arise.
  • These systems are often described as opaque—customers don’t know why they were flagged, and insurers can say “the system decided it.”
  • Plaintiffs argue the algorithms create an unfair advantage for insurers: many customers don’t appeal or don’t understand the denial, which saves the company money.
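The proxy-variable mechanism described above can be illustrated with a deliberately simplified sketch. This is a hypothetical toy example, not State Farm's actual system or anything from the complaint: every input name, threshold, and zip code below is invented. The point is that each rule looks facially neutral, yet inputs correlated with race or socioeconomic status can route certain groups into slower, more adversarial review at higher rates.

```python
# Hypothetical illustration only — NOT any insurer's real system.
# A toy rule engine scoring claims for "extra scrutiny" using
# facially neutral inputs that can act as proxies for protected traits.

# Placeholder zip codes; in a segregated housing market such a list
# can function as a proxy for race even without any racial data.
HIGH_RISK_ZIPS = {"36101", "36104"}

def scrutiny_score(claim: dict) -> int:
    """Return a score; higher scores push the claim toward extended review."""
    score = 0
    # Credit score: correlated with socioeconomic status.
    if claim.get("credit_score", 700) < 620:
        score += 2
    # Zip code: geographic proxy, as noted above.
    if claim.get("zip_code") in HIGH_RISK_ZIPS:
        score += 2
    # Criminal history: another facially neutral but correlated input.
    if claim.get("has_criminal_record", False):
        score += 1
    return score

def route_claim(claim: dict) -> str:
    """Claims over the threshold get slower, more adversarial handling."""
    return "extended_review" if scrutiny_score(claim) >= 3 else "fast_track"
```

Nothing in this sketch references race or disability directly, which is exactly why plaintiffs in such cases frame the systems as opaque: a denied or delayed claimant sees only the routing outcome, never the correlated inputs that produced it.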

What “cheat-and-defeat AI algorithms” imply in the lawsuit

  • The complaint alleges State Farm used algorithmic models that “cheat and defeat” — meaning they’re engineered to game or override fair decision logic in ways that mask discrimination.
  • The plaintiffs claim these algorithms systematically flag, delay, or deny claims from certain groups (e.g., disabled, nonwhite, or abuse-survivor policyholders) at a higher rate, using hidden or opaque decision rules.
  • In effect, the “cheat-and-defeat” phrasing suggests intentional design to evade detection, accountability, or regulatory oversight—even as outcomes may breach discrimination laws.
  • It’s part of a wave of legal challenges against insurers using AI or algorithmic systems to reject or limit claims, often with minimal human oversight.
  • In health insurance, for example, algorithms like nH Predict (used by some insurers) have been accused of rigidly enforcing cost-cutting rules, overruling clinician opinions, and causing wrongful denials.
  • Some suits argue that even non-AI, rule-based engines function similarly: they can produce decisions that seem automatic, opaque, and hard for claimants to contest.