A Texas Senate bill aims to prevent health insurance companies from using AI-based algorithms as the sole basis for denying claims, whether fully or partially.
Senate Bill 815 also seeks to prohibit the use of AI to deny, delay, or modify health care services on the basis of medical necessity. The bill grants the state insurance commissioner the authority to audit and inspect a utilization review agent’s AI use at any time.
If passed, the law would take effect on Sept. 1.
Requests for comment from Republican state Sen. Charles Schwertner, the bill’s sponsor, and from the trade groups America’s Health Insurance Plans and the Alliance for Artificial Intelligence in Healthcare were not returned.
In October 2024, California enacted similar restrictions, requiring that AI-driven decisions rely on an individual’s data rather than a group dataset.
California’s law, which took effect Jan. 1, limits insurers to using an individual’s medical or clinical history, circumstances presented by the requesting provider, and other relevant clinical details from their records.
Texas’ proposed law, along with regulations in California, Utah, and Colorado, represents growing efforts to regulate AI in health insurance. A report from consumer representatives for the National Association of Insurance Commissioners highlighted these laws as effective, though it noted that insurers’ AI use is advancing faster than regulations in most states.

As of late 2024, 17 states had adopted an AI model law from the NAIC, while four others had issued insurance-specific regulations or guidance.
“If the first step is admitting a problem, many states have not even gotten to this first step, much less tried to actively address it through regulation or policy,” said NAIC consumer representative Silvia Yee, senior staff attorney and policy analyst for the Disability Rights Education and Defense Fund.
Yee added that consumer input should shape AI regulations.
“Tech experts and well-meaning regulators can’t identify all the new obstacles AI may introduce or the existing ones it may worsen without consulting consumers,” Yee said.
Will Fleisher, a research assistant professor at Georgetown University’s Center for Digital Ethics, said a major concern with AI in health care is transparency.
“AI systems are often difficult to understand,” Fleisher said. “For advanced AI models, including large language models, even their developers do not fully grasp how they function. This lack of transparency creates problems for patients. If a claim is denied by a complex AI system, the insurer may not be able to explain why. Even if they do understand the reasoning, they might use AI’s complexity as an excuse to avoid providing an explanation.”
Fleisher also highlighted the issue of algorithmic bias.
AI systems frequently produce biased decisions, particularly against marginalized groups, he said, and that bias could result in higher denial rates for people of color or other disadvantaged populations.
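One way an auditor or regulator could make that concern concrete is to compare denial rates across demographic groups. The sketch below is a minimal, hypothetical illustration in Python; the claim records, group labels, and the 1.25x disparity threshold are invented for this example and do not reflect any insurer’s data or any legal standard.

```python
# Illustrative sketch: checking claim-denial rates for group-level disparity.
# All data and thresholds here are hypothetical placeholders.

from collections import defaultdict

# Hypothetical adjudication log: (demographic_group, was_denied)
claims = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

def denial_rates(records):
    """Return per-group denial rates from (group, denied) pairs."""
    totals, denials = defaultdict(int), defaultdict(int)
    for group, denied in records:
        totals[group] += 1
        denials[group] += int(denied)
    return {g: denials[g] / totals[g] for g in totals}

rates = denial_rates(claims)
baseline = min(rates.values())  # rate of the most favorably treated group

for group, rate in sorted(rates.items()):
    # Flag groups denied substantially more often than the baseline group.
    # The 1.25x threshold is an arbitrary placeholder, not a legal standard.
    flag = "  <-- disparity flag" if baseline and rate > 1.25 * baseline else ""
    print(f"{group}: denial rate {rate:.0%}{flag}")
```

On the hypothetical data above, the check would flag group_b, whose 50% denial rate exceeds 1.25 times group_a’s 25% rate; a real audit would also need statistical significance testing and a legally grounded fairness criterion.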
He warned that AI-driven systems could be designed in ways that prioritize profit over patient care.
Katherine McLane, spokesperson for the Texas Coalition for Patients, echoed these concerns in a statement responding to the proposed bill. “Patients should be treated as individuals, not data points,” she said. “AI can help streamline processes, but critical medical decisions should remain in the hands of doctors who understand patients’ unique needs.”
Schwertner has also said that AI has significant potential in health care but should not be used for life-or-death decisions. “This technology is still developing,” Schwertner said. “It should not replace human judgment in critical cases. Algorithms alone cannot account for the complexities of patient care.”