Red and blue states seek to limit the use of artificial intelligence in health insurance. Trump wants the opposite

How should insurers' use of artificial intelligence (AI) in health insurance be regulated? This unusual public policy question has placed Republican Gov. Ron DeSantis of Florida and Maryland's Democratic governor on the same side, and at odds with President Donald Trump and California Gov. Gavin Newsom.
The regulation of artificial intelligence, particularly its use by health insurers, is becoming a politically divisive issue that scrambles traditional partisan lines.
The federal government, with Trump at the helm, not only wants to fully embed AI in government, as in the Medicare experiment that uses it for prior authorizations (the process of approving certain treatments and drugs); it also wants to stop states that seek to set rules and limits. An executive order issued in December aims to invalidate most state laws, arguing there is "a race with adversaries for supremacy" in a new "technological revolution."
"To secure it, American AI companies must have the freedom to innovate without strict regulations," according to Trump's order. "But it is imperative that regulation not be excessively stifling."
Across the country, states are pushing back. Last year, at least four states, Arizona, Maryland, Nebraska, and Texas, approved laws that limit the use of AI in health insurance. Others, including Illinois and California, had passed similar laws the year before.
Rhode Island lawmakers plan to try again this year after failing to pass a bill in 2025 that would have let regulators collect data on insurers' use of the technology. In North Carolina, a bill introduced last year by Republican lawmakers would require that insurers not use AI as the sole basis for coverage decisions.
DeSantis, a former Republican presidential candidate, introduced an "AI Bill of Rights," with provisions including restrictions on its use in insurance claims and a requirement that a state regulator inspect the algorithms.
"We have a responsibility to ensure that new technologies develop in moral and ethical ways, in ways that reflect our established values, not ones that erode them," DeSantis said during his annual State of the State address in January.
Ready to regulate
Polls show that Americans are wary of AI. In December, a Fox News survey found that 63% of voters described themselves as "very" or "extremely" concerned about artificial intelligence. The concern spans the political spectrum: about two-thirds of Democrats and nearly 3 in 5 Republicans have reservations about AI.
Health insurers' cost-cutting tactics are also a public concern. A recent KFF survey showed widespread dissatisfaction over issues like prior authorization.
In recent years, reporting from ProPublica and other outlets has documented insurers' use of algorithms to quickly deny claims or prior authorization requests, without much review by a medical professional.
In January, the House Ways and Means Committee summoned executives from Cigna, UnitedHealth Group, and other major insurers to discuss concerns about the high costs of medical care.
When pressed directly, the executives denied or avoided discussing the use of the technology to deny authorization requests or claims.
AI "is never used to deny claims," David Cordani, chief executive of Cigna, assured lawmakers. Like other companies in the health insurance industry, the company has used algorithms to review claims, as reported by ProPublica. Justine Sessions, a Cigna spokesperson, said the company's claims process "is not AI-powered."
Instead, companies insist on presenting AI as a supporting tool that does not decide on its own. Optum, part of healthcare giant UnitedHealth Group, announced Feb. 4 that it is implementing tech-enabled prior authorization to allow faster decisions.
"We are transforming the prior authorization process to address the friction points it generates," John Kontor, senior vice president of Optum, said in a press release.
Meanwhile, Alex Bores, a computer scientist and member of the New York Assembly who was a key figure in the state's legislative debate on AI, which ended in a comprehensive law regulating the technology, said AI is a field that naturally requires regulation.
"A lot of people find the responses they get from their insurance companies hard to understand," said Bores, a Democrat who is seeking a seat in Congress. "Adding technology that can't explain its own decisions isn't going to make things clearer."
At least part of the healthcare sector, many doctors for example, backs the lawmakers defending these regulations.
The American Medical Association (AMA) "advocates for state regulations that provide more accountability and transparency from commercial insurers that use AI and machine-learning tools to review prior authorization requests," said John Whyte, its executive director.
Whyte said that insurers have used AI and that "doctors have suffered the consequences: harm to patient care, unclear decisions from insurers, inconsistent authorization rules, and administrative burnout."
Insurers respond
Even with legislation approved or awaiting approval in more states, it's unclear how much impact these laws will actually have, according to Daniel Schwarcz, a law professor at the University of Minnesota. States cannot regulate "self-insured" plans, which many employers use; only the federal government has that authority.
But there are deeper problems, Schwarcz says: most state laws aim to require that a human being review any AI-driven decision, but it's not clear what that means in practice.
The laws don't provide a clear framework for judging whether a review is sufficient, and over time humans tend to defer and simply rubber-stamp whatever a computer suggests, he says.
Insurers, for their part, see these bills as a problem.
"In broad terms, the regulatory burden is real," said Dan Jones, senior vice president of federal affairs for the Alliance of Community Health Plans, a trade group that represents some nonprofit health insurance plans. If insurers spend a lot of time dealing with a patchwork of state and federal laws, he said, that means less time and fewer resources for what they are supposed to be doing: making sure patients have adequate access to medical care.
Linda Ujifusa, a Democratic state senator from Rhode Island, said insurers lobbied against a bill introduced last year to restrict the use of AI in coverage denials. It passed in one chamber but stalled in the other.
"There is enormous opposition" to any attempt to regulate practices like prior authorization, she said, and also "huge opposition" to portraying intermediaries, such as private insurers or pharmacy benefit managers, "as part of the problem."
In a letter criticizing the bill, AHIP, the main group representing insurers, called for "balanced policies that promote innovation while protecting patients."
"Health plans recognize that AI has the potential to drive better outcomes in healthcare by improving the patient experience, speeding up care, accelerating innovation, and reducing administrative burden and costs to improve patient care," Chris Bond, an AHIP spokesperson, told KFF Health News.
He added that the sector needs "coherent national regulation based on a comprehensive federal AI policy framework."
In search of balance
In California, Newsom has signed some laws regulating AI, including one that requires health insurers to ensure their algorithms are applied fairly and equitably. But the Democratic governor has vetoed other, broader measures, such as a bill that would have imposed more requirements on how the technology operates and obliged insurers to disclose its use to regulators, doctors, and patients.
According to Chris Micheli, a Sacramento lobbyist, the governor likely wants to ensure that the state budget, which stays afloat largely thanks to big publicly traded companies, especially technology companies, does not suffer. And for that, he says, a balance must be struck.
Newsom is "making sure that this flow of money continues and, at the same time, that there are some protections for California consumers," he said. He added that insurers feel they are already subject to a large amount of regulation.
The Trump administration appears ready to go further. The president's recent executive order proposes suing states and withholding certain federal funds from any state whose regulation it characterizes as "excessive," with certain exceptions, such as policies intended to protect children.
That may be ruled unconstitutional, said Carmel Shachar, a health policy expert at Harvard Law School. The authority to invalidate state laws usually must come explicitly from Congress, and federal lawmakers have twice considered, but ultimately rejected, a provision barring states from regulating AI.
"Just our prior understanding of federalism and the balance of power between Congress and the executive branch makes it very likely that a challenge will succeed," Shachar said.
Some lawmakers are highly skeptical of Trump's order and point out that the administration has eliminated safeguards and prevented others from being established, to an extreme degree.
"Right now, it's not a question of whether regulation should be federal or state," Alex Bores said. "The question is whether we regulate at the state level or not at all."
