ChatGPT is not always reliable on medical advice, new research suggests

Digitally generated image of a young African American man in a suit, standing on a purple ramp and looking at several chat message icons: an artificial intelligence chatbot communication concept.

Andriy Onufriyenko/Moment RF/Getty Images

As technology companies deploy purpose-built platforms for healthcare consultations, AI is quickly becoming a key player in many people’s medical decisions. According to OpenAI, the creator of ChatGPT, more than 40 million people visit the platform every day for health information.

But new research suggests that AI could mislead users in certain medical scenarios.

One risk: while AI puts vast medical knowledge at your disposal, many laypeople do not know how to use it effectively. In a study recently published in the journal Nature Medicine, researchers attempted to simulate how people use AI chatbots by giving participants medical scenarios and asking them to consult AI tools. After talking with the chatbots, participants correctly identified the hypothetical condition only about a third of the time.

Only 43% made the right decision about next steps, like going to the emergency room or staying home.

“People don’t know what they’re supposed to tell the model,” says Andrew Bean, who studies AI systems at the University of Oxford and is one of the authors of the study.

Bean says that when using AI, arriving at a useful conclusion often comes down to word choice. “Doctors are trained to ask you about symptoms you might not have realized you should have mentioned,” says Bean.

In one example, two users described the same scenario in slightly different ways. One mentioned “the worst headache I’ve ever had,” and the AI told him to go to the emergency room immediately. The other, who did not use that explicit description, was told to take an aspirin and stay home. “It turns out it was actually a potentially fatal disease,” says Bean.

AI can excel at identifying medical problems: in some studies, large language models have matched or even outperformed doctors on diagnostic reasoning tasks. But the way people use AI chatbots, Bean says, is far messier than the controlled clinical settings in which the models perform well.

Good diagnosis, bad advice

Even when AI correctly identifies a condition, it often fails to convey the appropriate degree of urgency about next steps, according to another study.

The researchers presented AI chatbots with a range of medical scenarios. In 52% of emergency cases, the chatbots “undertriaged,” treating the illness as less serious than it was. In one example, a chatbot failed to tell a hypothetical patient suffering from diabetic ketoacidosis and impending respiratory failure, a potentially fatal condition, to go to the emergency room.

“When there was a classic medical emergency, ChatGPT got it right,” said Girish Nadkarni, a physician and AI researcher at Mount Sinai and an author of the study. The problem, Nadkarni said, is that there are more complicated scenarios in which a “time element” comes into play: The chatbot often misjudges how long a patient can wait before receiving care.

An OpenAI spokesperson said this study does not represent how people actually use ChatGPT, and that the earlier study used an older version of the model; the company says it has since addressed some of the concerns that surfaced.

AI can improve doctor visits

Despite concerns about inaccuracy, doctors who study AI say it can be useful for patients seeking healthcare information, and they point out that it has sometimes even provided lifesaving advice.

“I encourage patients to use these tools,” says Robert Wachter, a physician at UC San Francisco and author of the recently published book One Giant Leap: How AI Is Transforming Healthcare and What It Means for Our Future.

Wachter argues that with healthcare often hard to access and afford, consulting AI is frequently better than the alternatives. “The advice you get from the tools is much better than nothing and better than what you would get from your first cousin,” Wachter says.

However, Wachter emphasizes, AI does not replace a doctor.

Adam Rodman, a hospitalist who studies AI programs at Harvard Medical School, discourages people from using AI to triage emergency situations, but says AI can add significant value to a patient’s interaction with a human doctor.

“A good time to use a large language model is when you’re about to go see a doctor or after seeing your doctor,” says Rodman. Consulting AI at those points can help patients better understand their condition before an appointment and use time with providers effectively, he says, by letting them collaborate with their doctor on decisions rather than spending the visit on lengthy question-and-answer sessions.

“There’s no downside to understanding your health better,” says Rodman.

AI in healthcare is here to stay

The doctors interviewed for this story recognize that AI and medicine are already inextricably linked, and they expect AI and humans to become increasingly capable of working together.

“I hope you can think of AI as an extension of a human relationship,” Rodman says. He imagines a future in which doctors and patients partner with AI to facilitate communication and cut through medical bureaucracy.

Rodman says AI also carries risks. He fears a future in which people learn of frightening diagnoses, like cancer, from a bot rather than a human. Studies show that when healthcare feels more like a commercial transaction, people trust doctors less.

“What I hope is that this technology can be used in a way that improves humanity in medicine,” says Rodman, “and not in a way that removes the doctor-patient relationship.”
