We must not let AI ‘take the doctor out of the visit’ for low-income patients | Leah Goodridge and Oni Blackstock

In Southern California, where homelessness rates are among the highest in the country, a private company, Akido Labs, runs clinics for homeless and low-income patients. The catch? Patients are seen by medical assistants who use artificial intelligence (AI) to listen to conversations and then issue potential diagnoses and treatment plans, which are then reviewed by a doctor. The company’s goal, its chief technology officer told MIT Technology Review, is to “take the doctor out of the visit.”
This is dangerous. Yet it is part of a larger trend of generative AI being integrated into healthcare. A 2025 American Medical Association survey found that two out of three doctors were using AI in their daily work, including to help diagnose patients. An AI startup has raised $200 million to provide healthcare professionals with an app billed as a “ChatGPT for doctors.” US lawmakers are considering a bill that would recognize AI as capable of prescribing medications. While this trend affects almost all patients, it has a more profound impact on low-income people, who already face significant barriers to care and higher rates of mistreatment in healthcare settings. Unhoused and low-income people should not serve as a testing ground for AI in healthcare. Instead, their voices and priorities should determine if, how, and when AI is implemented in their care.
The rise of AI in healthcare has not happened in a vacuum. Overcrowded hospitals, overworked clinicians, and relentless pressure on medical practices to run seamlessly, moving patients in and out of a vast for-profit health system, set the conditions. The demands placed on healthcare workers are compounded in economically disadvantaged communities, where facilities are often under-resourced and patients are more likely to be uninsured and to carry a greater burden of chronic illness driven by racism and poverty.
This is where someone might ask: “Isn’t something better than nothing?” Well, actually, no. Studies show that AI-based tools generate inaccurate diagnoses. A 2021 study in Nature Medicine examined AI algorithms trained on large chest X-ray datasets used in medical imaging research and found that these algorithms systematically underdiagnosed Black and Latinx patients, female patients, and patients with Medicaid insurance. This systematic bias risks worsening health inequities for patients already facing barriers to care. Another study, published in 2024, found that AI used in breast cancer screening produced a higher risk of false positives for Black patients than for their white counterparts. Due to algorithmic bias, some AI clinical tools have notoriously performed worse for Black patients and other people of color. This is because AI does not “think” independently; it relies on probabilities and pattern recognition, which can reinforce bias against already marginalized patients.
Some patients are not even aware that their healthcare provider or health system uses AI. A medical assistant told MIT Technology Review that his patients know an AI system is listening to them, but that he does not tell them it is making diagnostic recommendations. This echoes a history of exploitative medical racism in which Black people were experimented on without informed consent, and often against their will. Can AI help healthcare providers by quickly surfacing information that allows them to move on to the next patient? Maybe. But the problem is that this may come at the expense of diagnostic accuracy and worsen health inequities.
And the potential impact goes beyond diagnostic accuracy. TechTonic Justice, an advocacy group working to protect economically marginalized communities from the harms of AI, released a groundbreaking report estimating that 92 million low-income Americans “are having some fundamental aspects of their lives decided by AI.” These decisions range from how much coverage they receive through Medicaid to their eligibility for Social Security disability benefits.
A real-life example is currently playing out in federal court. In 2023, a group of Medicare Advantage customers sued UnitedHealthcare in Minnesota, alleging they were denied coverage because the company’s AI system, nH Predict, erroneously deemed them ineligible. Some of the plaintiffs are the estates of Medicare Advantage customers; these patients allegedly died as a result of being denied medically necessary care. UnitedHealth sought to dismiss the case, but in 2025 a judge ruled that the plaintiffs could move forward with some of their claims. A similar case was filed in federal court in Kentucky against Humana. There, Medicare Advantage customers alleged that Humana’s use of nH Predict “spits out generic recommendations based on incomplete and inadequate medical records.” That case is also ongoing; a judge ruled that the plaintiffs’ legal arguments were sufficient to survive the insurer’s motion to dismiss. Although final decisions in both cases remain pending, they point to a growing trend of using AI to decide health coverage for low-income people, and to its pitfalls. If you have financial resources, you can benefit from quality healthcare. But if you are unhoused or have a low income, AI can prevent you from fully accessing healthcare at all. This is medical classism.
We should not experiment with deploying AI on patients who are unhoused or have low incomes. The documented harms outweigh the unproven benefits promised by startups and other technology companies. Given the barriers unhoused and low-income people face, it is essential that they receive patient-centered care from a human healthcare provider who is attentive to their health needs and priorities. We cannot normalize a healthcare system in which practitioners take a back seat while AI run by private companies takes the lead. An AI system that “listens,” developed without rigorous evaluation by the communities it affects, disempowers patients by stripping them of the power to decide which technologies, including AI, are used in their care.
Leah Goodridge is an attorney who has worked in the field of homelessness prevention for 12 years.
Oni Blackstock, MD, MHS, is a physician, founder and executive director of Health Justice, and Public Voices Fellow on technology in the public interest with The OpEd Project.




