AI Is Revolutionizing Health Care. But It Can’t Replace Your Doctor

The next time you have a blood test, X-ray, mammogram, or colonoscopy, there is a good chance that an artificial intelligence (AI) algorithm will interpret the results before your doctor sees them.
In just a few years, AI has spread quickly through hospitals and clinics around the world. More than 1,000 health AI tools have been authorized by the U.S. Food and Drug Administration (FDA), and more than 2 in 3 doctors say they use AI to some extent, according to a recent survey by the American Medical Association.

The potential is extraordinary. AI, particularly in the form of AI agents that can reason, adapt, and act on their own, can lighten doctors' workloads by drafting patient notes and charts, support precision medicine through more targeted therapies, and flag subtle anomalies on scans and slides that the human eye might miss. It can accelerate the discovery of drugs and drug targets through new approaches, such as the AI-driven protein structure prediction and design that earned last year's Nobel Prize in Chemistry. AI can offer patients faster and more personalized support by scheduling appointments, answering questions, and flagging side effects. It can help match candidates to clinical trials and monitor health data in real time, alerting clinicians and patients early to prevent complications and improve outcomes.
But the promise of AI in medicine will be realized only if it is built and used responsibly.
Today's AI algorithms are powerful tools that recognize patterns, make predictions, and even make decisions. But they are not infallible, omniscient oracles. They are not about to match human intelligence, despite what some evangelists of artificial general intelligence suggest. A handful of recent studies illustrate both the possibilities and the pitfalls, underscoring how medical AI tools can misread patients and how doctors' own skills can weaken with AI.
A team at Duke University (which includes one of us) tested an FDA-cleared AI tool designed to detect swelling and microbleeds in brain MRIs of patients with Alzheimer's disease. The tool improved expert radiologists' ability to find these subtle spots on an MRI, but it also raised false alarms, often mistaking harmless blurs for something dangerous. We concluded that the tool is useful, but radiologists should first read the MRIs carefully themselves, then use the tool as a second opinion, not the other way around.
These kinds of results are not limited to the tool we examined. Few hospitals independently evaluate the AI tools they use. Many assume that just because a tool has been cleared by the FDA, it will work in their local context, which is not necessarily true. AI tools perform differently across patient populations, and each has its own weaknesses. That is why it is essential for health systems to perform due diligence and quality control before deploying an AI tool, to ensure it will work in their local setting, and then to educate clinicians accordingly. In addition, AI algorithms, and the ways humans interact with them, change over time, which has prompted former FDA commissioner Robert Califf to call for continuous postmarket monitoring of medical AI tools to ensure they remain reliable and safe in the real world.
In another recent study, gastroenterologists in Europe were given a new AI-assisted system to identify polyps during colonoscopies. Using the tool, they initially found more polyps, growths that can turn into cancer, because the AI's suggestions helped them spot areas they might otherwise have missed. But when the doctors then went back to performing colonoscopies without the AI system, they detected fewer precancerous polyps than they had before ever using AI. Although it is not entirely clear why, the study's authors believe clinicians may have become so reliant on AI that, in its absence, they were less focused and less able to identify these polyps. This phenomenon of "deskilling" is supported by another study showing that overreliance on computer aids can make human eyes less likely to scan the peripheral visual field. The very tool intended to sharpen medical practice may have blunted it.
AI, if used uncritically, can not only propagate erroneous information but also erode our very ability to verify it. This is the Google Maps effect: drivers who once navigated from memory often lack basic geographic awareness because they are used to blindly following the voice in their car. Earlier this year, a researcher surveyed more than 600 people across age groups and educational backgrounds and found that the more they relied on AI tools, the weaker their critical thinking skills. This is known as "cognitive offloading," and we are just beginning to understand how it relates to clinicians' use of AI.
All of this underscores that AI in medicine, as in every field, works best when it augments the work of humans. The future of medicine is not about replacing health care providers with algorithms; it is about designing tools that sharpen human judgment and amplify what we can accomplish. Doctors and other providers must be able to recognize when AI is wrong and must retain the ability to work without AI tools if necessary. The way to make that happen is to build medical AI tools responsibly.
We need tools built on a different paradigm, one that pushes providers to look again, weigh alternatives, and stay actively engaged. This approach is known as intelligent choice architecture (ICA). With ICA, AI systems are designed to support judgment rather than supplant it. Instead of declaring, "here is a bleed," an ICA tool might highlight an area and prompt, "check this region carefully." ICA strengthens the skills that medicine depends on: clinical reasoning, critical thinking, and human judgment.
Apollo Hospitals, the largest private health system in India, recently started using an ICA tool to guide doctors in preventing heart attacks. A previous AI tool had provided a single heart-attack risk score for each patient. The new system provides a more personalized breakdown of what that score means for each patient and what contributed to it, so the patient knows which risk factors to address. This is an example of the kind of gentle nudge that can help doctors succeed in their work without taking away their autonomy.
There is a temptation to oversell AI as if it had all the answers. In medicine, we must temper those expectations to save lives. We must train medical students to work both with and without AI tools, and to treat AI as a second opinion or an assistant rather than an expert with all the right answers. The future is humans and AI agents working together.
We have added tools to medicine before without weakening clinicians' skills. The stethoscope amplifies the ear without replacing it. Blood tests provide new diagnostic information without eliminating the need for a medical history or physical exam. We should hold AI to the same standard. If a new product makes doctors less attentive or less discerning, it is not ready for prime time, or it is being used the wrong way.
For any new medical AI, we must ask whether it makes the clinician more thoughtful or less. Does it encourage a second look or invite a rubber stamp? If we commit to designing only systems that sharpen rather than replace our abilities, we will have the best of both worlds, combining the extraordinary promise of AI with the critical thinking, compassion, and sound judgment that only humans can bring.

