California is taking a significant step to address public concerns regarding artificial intelligence (AI) in health care. Senate Bill 1120 (SB 1120), signed into law by Governor Gavin Newsom in 2024, prohibits health insurance providers from using AI as the sole basis for denying claims. The law aims to uphold the critical role of human judgment in health care decision-making. By balancing technological advancement with patient care, the legislation is set to have both immediate and far-reaching impacts.
AI’s Growing Role in Health Insurance and Its Controversies
The rapid adoption of AI in industries like health care has led to efficiency improvements, but it has also sparked ethical and practical concerns. Health insurance companies, for instance, have increasingly relied on AI algorithms to streamline claims processing and decision-making. However, critics argue this reliance can lead to unjust claim denials and a lack of empathy in addressing unique patient needs.
- 26% of California Health Insurance Claims Denied: According to data from the California Nurses Association, approximately one in four insurance claims in the state was denied in 2024. A significant portion of these denials was attributed to automated systems, frustrating patients and providers alike.
- Federal Concerns: A 2023 lawsuit against UnitedHealthcare alleged misuse of AI in determining claim outcomes, claiming that some systems prioritized cost-cutting over patient well-being.
SB 1120 directly addresses such cases, mandating that every claim involve human oversight. While AI remains a tool for assisting decisions, doctors’ expertise and professional judgment are now firmly required.
Key Provisions of SB 1120
This legislation outlines specific parameters to ensure fairness and transparency in insurance-related decisions. Notable provisions include:
- Human Oversight in Decision-Making: AI systems can no longer deny, delay, or alter services deemed medically necessary by licensed doctors.
- Strict Decision Deadlines: Insurance providers must make determinations within:
  - Five business days for standard cases
  - 72 hours for urgent cases
  - 30 days for retrospective reviews
- Penalties for Noncompliance: Insurance companies face fines for missing deadlines or improperly using AI, reinforcing accountability.
- Regulatory Oversight: The California Department of Managed Health Care will monitor implementation, audit denial rates, and improve transparency across the industry.
Examples of the Law’s Impacts
Paula Wolfson, a manager at Avenidas Care Partners, emphasized the challenges faced by older adults. Many patients, she explained, “experience immense stress when denied access to necessary care.” With SB 1120 now in effect, vulnerable populations may find relief as human oversight becomes a mandated part of the claims process.
State Senator Josh Becker, who authored the bill, explained that the legislation is not about rejecting AI technology. Instead, it’s about emphasizing medical expertise. “Patients are unique, with complex medical histories no algorithm can fully understand,” he noted, adding that the law aims to balance innovation with patient care.
Other Industries Facing AI Challenges
Health insurance is not the only sector grappling with AI implementation issues. Across various industries, similar concerns about transparency, fairness, and human oversight have emerged:
Auto Insurance
AI has been embraced in auto insurance for assessing claims and determining responsibility in accidents. However, over-reliance on these systems often leads to customer dissatisfaction and disputes over liability.
- Privacy Risks: Companies have faced backlash for using user data without consent to predict driving behavior.
- Errors in Claims Processing: Automated accident analyses can sometimes misjudge road conditions or vehicle positions, leading to incorrect claim denials.
- Bias Concerns: There are rising complaints that AI models may unintentionally disadvantage drivers based on geographic location or car ownership history.
Recruiting and Hiring
AI-powered tools have become common in screening job applicants, but critics argue these systems often exhibit bias. For example:
- Certain algorithms filter out qualified candidates whose applications lack specific keywords, creating hiring gaps.
- There have been publicized cases of AI tools discriminating against applicants of specific genders or backgrounds, reinforcing systemic biases.
Consumer Lending
Many financial institutions have adopted AI for credit scoring and lending decisions. While AI models can quickly analyze vast amounts of consumer data, they risk perpetuating inequalities.
- Unintended Discrimination: Algorithms may penalize borrowers from marginalized communities or inaccurately assess creditworthiness due to data gaps.
- Appeals Process Issues: Consumers find it difficult to contest AI-driven loan rejections when these systems lack human review mechanisms.
Ensuring Responsible Use of AI Across Industries
California’s legislative effort can serve as a model beyond health insurance to address the ethical dilemmas presented by AI. Drawing on its example, industries should focus on three key strategies to integrate technology responsibly:
- Preserving Human Judgment: Automated systems should complement—not replace—human expertise. Whether in health care or finance, decisions affecting individuals’ lives require empathy and context that AI cannot yet provide fully.
- Improving Transparency: Companies must ensure their processes are understandable and traceable. Clear documentation about how decisions are made enhances public trust.
- Regular Oversight and Accountability: Regulators should mandate audits to prevent misuse of AI and impose penalties when companies violate ethical or procedural standards.
Looking Ahead
Artificial intelligence holds immense potential to improve systems across sectors. From expediting health care authorizations to enhancing auto safety, the possibilities are vast. However, its integration must be guided by a commitment to prioritizing human welfare. California’s SB 1120 underscores a broader lesson for responsible AI development—it should enhance efficiency without sacrificing fairness or empathy.
By combining technological advancement with ethical safeguards, businesses and policymakers can work together to create a future where AI serves as an ally, not a barrier, to improving lives.