Is Giving ChatGPT Health Your Medical Records a Good Idea?


Your AI doctor’s practice is growing. On January 7, OpenAI announced that over the next few weeks it would roll out ChatGPT Health, a dedicated health tab that allows users to upload their medical records and connect apps like Apple Health, personalized health testing platform Function, and MyFitnessPal.

According to the company, more than 40 million people ask ChatGPT a healthcare-related question every day, accounting for more than 5% of all global messages on the platform. So, from a business perspective, looking at healthcare makes sense. But what about from the patient’s point of view?

“I was not shocked to hear this news,” says Dr. Danielle Bitterman, radiation oncologist and clinical lead for data science and AI at Mass General Brigham Digital. “I think it speaks to an unmet need that people have for health care. It’s difficult to access a doctor, it’s difficult to find medical information today and there is unfortunately some distrust in the medical system.”

We asked experts if feeding your health data to an AI tool is a good idea.

What is ChatGPT Health?

The new feature will be a platform where people can upload their medical records, including lab results, visit summaries and clinical histories. That way, when you ask the bot questions, they will be “based on the information you’ve logged in,” the company said in its announcement. OpenAI suggests asking questions such as: “How is my cholesterol level changing?” “Can you summarize my latest blood tests before my appointment?” “Give me a summary of my general health.” Or: “I have my annual physical tomorrow. What should I talk to my doctor about?”

Learn more: 9 Doctor-Approved Ways to Use ChatGPT for Health Advice

Users can also connect ChatGPT to Apple Health, so the AI tool has access to data such as the number of steps per day, sleep duration, and the number of calories burned during a workout. Another new addition is the ability to sync with data from Function, a company that tests for more than 160 markers in blood, so ChatGPT has access to lab results as well as health suggestions from clinicians. Users can also connect MyFitnessPal for nutrition tips and recipes, and Weight Watchers for meal ideas and recipes for people taking GLP-1 medications.

OpenAI, which has a licensing and technology agreement allowing the company to access TIME’s archives, notes that Health is designed to support healthcare, not replace it, and is not intended to be used for diagnostic or treatment purposes. The company says it spent two years working with more than 260 doctors in dozens of specialties to shape what the tool can do, as well as how it responds to users. This includes the urgency with which it encourages people to follow up with their provider, the ability to communicate clearly without oversimplification, and prioritizing safety when people are in mental distress.

Is it safe to upload your medical data?

OpenAI has partnered with b.well, a data connectivity infrastructure company, to allow users to securely connect their medical records to the tool. The Health tab will have “enhanced privacy,” including a chat history and memory feature separate from other tabs, according to the announcement. OpenAI also stated that “health conversations are not used to train our base models” and that health information will not flow into non-health chats. Additionally, users can “view or delete health memories at any time.”

Some experts nevertheless call for caution. “The most conservative approach is to assume that any information you upload to these tools, or any information that may be in applications that you otherwise link to the tools, will no longer be private,” Bitterman says.

No federal regulator governs health information provided to AI chatbots, and ChatGPT provides technology services that fall outside the scope of HIPAA. “It’s a contractual agreement between the individual and OpenAI at this point,” says Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center. “If you provide data directly to a technology company that does not provide any health care services, then buyer beware.” In the event of a data breach, ChatGPT users would have no specific rights under HIPAA, he adds, although it’s possible the Federal Trade Commission could intervene on your behalf or you could sue the company directly. As medical information and AI begin to intersect, the implications are so far unclear.

“When you see and interact with your health care provider, there is a professional agreement that they will keep this information confidential, but that is not the case here,” Malin says. “You don’t know exactly what they’re going to do with your data. They say they’re going to protect it, but what exactly does that mean?”

Learn more: The 4 Words That Drive Your Doctor Up the Wall

When asked for comment on January 8, OpenAI directed TIME to a post on X from Chief Information Security Officer Dane Stuckey. “Conversations and files in ChatGPT are encrypted by default at rest and in transit as part of our core security architecture,” he wrote. “For health, we’ve built additional layered protections on top of this. This includes another layer of encryption… enhanced isolation and data segmentation.” He added that the company’s changes “give you maximum control over how your data is used and accessed.”

The question every user needs to ask is “do you trust OpenAI to keep its word,” says Dr. Robert Wachter, chair of the department of medicine at the University of California, San Francisco and author of One Giant Leap: How AI Is Transforming Healthcare and What It Means for Our Future.

Does he trust the company? “I sort of do, partly because it’s in the companies’ interest not to screw it up,” he says. “If they want to address sensitive topics like health, their brand is going to depend on you feeling comfortable doing that, and the first time there’s a data breach, it’s like, ‘Get my data out of there, I’m not sharing it with you anymore.’”

Wachter says that if there were information in his records that could be damaging if leaked, like a history of drug use, he would be reluctant to upload it to ChatGPT. “I would be a little careful,” he says. “Everyone is going to be different about this, and over time as people become more comfortable, if you think what you’re getting out of it is useful, I think people will be more than willing to share information.”

The risk of bad information

Beyond privacy concerns, there are known risks of using chatbots based on large language models for health information. Bitterman recently co-authored a study that found that models are designed to prioritize helpfulness over medical accuracy and to always provide an answer, often one the user wants to hear. In one experiment, for example, models trained to know that acetaminophen and Tylenol are the same drug nevertheless produced inaccurate information when asked why one was safer than the other.

“The balance between helpfulness and accuracy is tipped toward helpfulness,” Bitterman says. “But in medicine we need to be more precise, even if it comes at the cost of helpfulness.”

Additionally, several studies suggest that if information is missing from your medical record, models are more likely to hallucinate or produce incorrect or misleading results. According to a report on supporting AI in healthcare from the National Institute of Standards and Technology, the quality and completeness of the health data a user provides to a chatbot directly determine the quality of the results it generates; poor or incomplete data leads to inaccurate and unreliable results. A few common traits help improve data quality, the report notes: information that is correct and factual, complete and consistent, and free of outdated or misleading entries.

In the United States, “we receive our health care from different sites, and it’s fragmented over time, so most of our health records are not complete,” Bitterman says. This increases the likelihood of errors as the model guesses at what happened in the gaps, she says.

The best way to use ChatGPT Health

Overall, Wachter sees ChatGPT Health as a step forward from the status quo. People were already using the bot for health questions, and by providing it with more context via their medical records, such as a history of diabetes or blood clots, he believes they will receive more useful answers.

“I think what you’ll get today is better than what you had before if all your basic information is there,” he says. “Knowing that context would be helpful. But I think the tools themselves will need to improve over time and be a little more interactive than they are now.”

When Dr. Adam Rodman watched the ChatGPT Health introductory video, he was pleased with what he saw. “I thought it was pretty good,” says Rodman, a general internist at Beth Israel Deaconess Medical Center, where he leads the working group on integrating AI into the medical school curriculum, and an assistant professor at Harvard Medical School. “The goal was really to use it to better understand your health, not as a replacement, but as a way to improve it.” Since people were already using ChatGPT for things like analyzing lab results, the new feature will just make that easier and more convenient, he says. “I think this is more reflective of what health care is going to look like in 2026 rather than some kind of super new thing,” he says. “This is the reality of how health care is changing.”

Learn more: 10 Questions You Should Always Ask During Doctor Appointments

When Rodman advises his patients on how best to use AI tools, he tells them to avoid health-management questions, like asking the bot to choose the best treatment plan. “Don’t let it make autonomous medical decisions,” he says. But it’s perfectly legitimate to ask it whether anything is missing from your records, or to explore “low-risk” topics like diet and exercise programs or interpreting sleep data.

One of Bitterman’s favorite uses is to have ChatGPT help her brainstorm questions before a doctor’s appointment. Augmenting your existing care in this way is a good idea, she says, with one obvious benefit: “You don’t necessarily need to upload your medical records.”
