WHO's AI-Powered Chatbot SARAH Giving Inaccurate Medical Responses

By: Team VOH


20 Apr 2024

The World Health Organization's (WHO) AI chatbot, SARAH (Smart AI Resource Assistant for Health), has come under scrutiny for providing inaccurate medical information. Designed to offer immediate health advice and fill the gap left by healthcare worker shortages, SARAH's early prototype has been flagged for its inconsistent and sometimes incorrect responses.

The WHO cautioned on its website that SARAH's responses "may not always be accurate," revealing that the bot occasionally produces erroneous answers, referred to as "hallucinations" in AI terminology, which could spread misinformation on public health matters.


About SARAH:


Introduced ahead of World Health Day under the theme 'My Health, My Right,' SARAH is a virtual health worker available 24/7 in eight languages. It aims to educate users on topics like mental health, tobacco use, and nutrition, leveraging empathetic responses powered by generative AI to revolutionize global health information access.


Challenges with SARAH's Accuracy:


SARAH was built on OpenAI's GPT-3.5, trained on data only up to September 2021, resulting in outdated medical information. For example, SARAH incorrectly stated that the Alzheimer's drug Lecanemab was still in clinical trials when it had received approval for early disease treatment in January 2023. Moreover, SARAH struggled to provide immediate updates from a recent WHO report on hepatitis deaths and failed to assist a user seeking mammogram locations in Washington, DC.


WHO's Stance on SARAH's Limitations:


The WHO clarified that SARAH lacks diagnostic capabilities and avoids topics outside its purview, directing users to consult healthcare providers or visit the WHO website for further information. Ramin Javan, a radiologist from George Washington University, commented on SARAH's limitations, stating, "It lacks depth, but this is just the first step."


SARAH's Development and Concerns:


SARAH is an evolution of the WHO's 2021 virtual health worker project, Florence, which focused on COVID-19 and tobacco education. Developed by New Zealand-based company Soul Machines, SARAH draws on GPT data to enhance its performance, although Soul Machines itself does not have access to SARAH's specific data set.


However, relying on third-party data sources like GPT's can expose users to cyber threats, including malware attacks and camera hacking. Despite these risks, WHO's communications director, Jaimie Guerra, maintains that data breaches are unlikely because sessions are anonymous.


Alain Labrique, WHO's director of digital health and innovation, emphasized that SARAH and similar technologies are not substitutes for professional medical advice.


WHO's Future Plans for SARAH:


WHO envisions SARAH collaborating with researchers and governments to deliver accurate public health information. The agency is seeking ways to improve SARAH's performance, especially in emergency health situations, acknowledging that SARAH is still a work in progress.


Earlier this year, WHO issued ethical guidelines for health-related AI models, emphasizing data transparency and safety.


The Safety of AI in Healthcare:


The reliability of AI in healthcare remains a contentious issue. A study by Vanderbilt University Medical Center found that ChatGPT, a widely used AI chatbot, was "spectacularly and surprisingly wrong" in answering several medical questions. Despite AI's advancements, technologies like Deep Neural Networks (DNNs) can still make critical errors, underscoring the need for rigorous testing before deploying AI systems for critical healthcare tasks.


