The chatbot will see you now: how AI is being trained to spot mental health issues in any language

When patients call Butabika Hospital in Kampala, Uganda, seeking help for mental health issues, they are also helping future patients, by contributing to the creation of a therapy chatbot.
Calls to the clinic’s hotline are used to train an AI algorithm that researchers hope will eventually power a chatbot offering therapy in local African languages.
One in 10 people in Africa suffers from a mental health problem, but the continent has a severe shortage of mental health workers, and stigma is a huge barrier to care in many places. AI could help solve these problems where resources are scarce, experts say.
Professor Joyce Nakatumba-Nabende is the scientific lead of the Makerere AI Lab at Makerere University. Her team works with Butabika Hospital and with Mirembe Hospital in Dodoma, in neighboring Tanzania.
Some callers simply need factual information about hours of operation or staff availability, but others talk about suicidal feelings or reveal other red flags about their mental state.
“Someone probably won’t say the word ‘suicidal’ or ‘depression,’ because some of those words don’t even exist in our local languages,” Nakatumba-Nabende says.
After removing patient identifying information from call recordings, Nakatumba-Nabende’s team uses AI to sift through them and determine how people speaking in Swahili or Luganda — or another of Uganda’s dozens of languages — might describe particular mental health disorders such as depression or psychosis.
Over time, recorded calls could be passed through the AI model, which would establish that “based on this conversation and the keywords, there is perhaps a tendency towards depression, there is a tendency towards suicide, [and so] can we escalate the call, or call the patient back for a follow-up,” says Nakatumba-Nabende.
Current chatbots tend not to understand the context in which care is provided or what is available in Uganda, and are only available in English, she says. The end goal is to “deliver mental health care and services all the way to the patient” and quickly identify when people need the more specialized care offered by psychiatrists.
The service could even be provided via SMS messaging for people who don’t have a smartphone or internet access, Nakatumba-Nabende says.
The benefits of a chatbot are numerous, she says. “When you automate, it’s faster. You can easily provide more services to people, and get a result faster than if you put someone through a medical degree, then a specialization in psychiatry, then internship and training.”
Scale and scope are also important: an AI tool is easily accessible at any time. And, Nakatumba-Nabende says, people are hesitant to seek mental health care at clinics because of the stigma. A digital intervention gets around this.
She hopes the project will enable the existing workforce to “provide care to more people” and “reduce the burden of mental illness in the country.”
Miranda Wolpert, director of mental health at the Wellcome Trust, which funds various projects investigating AI for mental health globally, says the technology shows promise in diagnostics. “Right now we rely a lot on people filling out paper and pencil questionnaires, and it could be that AI can help us think more effectively about how we can identify someone who is struggling,” she says.
Technology-facilitated treatments could also be very different from traditional mental health options like talk therapy or medication, Wolpert says, citing Swedish research on how playing Tetris could alleviate PTSD symptoms.
Regulators, however, are still grappling with the implications of increased use of AI in healthcare. For example, the South African Health Products Regulatory Authority (SAHPRA) and health NGO Path are using Wellcome funding to develop a regulatory framework.
Bilal Mateen, director of AI at Path, says it is important for countries to develop their own regulations. “‘Does this thing work well in Zulu?’, which is an issue that concerns South Africa, is not an issue that the FDA [the US Food and Drug Administration], I think, has ever considered,” he says.
Christelna Reynecke, operations director at SAHPRA, wants users of a mental health AI algorithm to have the same assurance as someone taking a medication: that it has been verified and is safe. “It’s not going to start hallucinating and give you strange results and cause more harm than good,” she says.
In the background looms the specter of suicides linked to chatbot use, and of cases where AI appears to have fueled psychosis.
Reynecke wants to develop an advanced monitoring system capable of identifying “at risk” results from generative AI tools in real time. “It can’t be an ‘after the event’ intervention, so long after the event that you might have put other patients at risk, because you didn’t intervene quickly enough,” she says.
The UK regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), has launched a similar initiative and is working with technology companies to understand how best to regulate AI in medical devices.
Regulators must decide which risks are important to monitor, says Mateen. Sometimes the benefits outweigh the potential harms to the extent that there is “an impetus for us to put this in people’s hands because it will help them.”
While much of the discussion around AI revolves around chatbots such as Google Gemini and ChatGPT, Mateen suggests that “AI and generative AI … could be used for so much more,” such as training peer counselors to provide higher-quality care, or matching people to the best type of treatment more quickly.
“A billion people in the world today suffer from a mental health problem,” he says. “We don’t just have a workforce shortage in sub-Saharan Africa; we have a workforce shortage everywhere – talk to someone in the UK about how long they have to wait to access talking therapies.
“Unmet needs around the world could be met more effectively if we had better access to safe and effective technology.”


