The World Health Organization is calling for caution in the use of artificial intelligence-generated large language model tools (LLMs) such as ChatGPT, Bard, BERT and others that imitate human understanding, processing and communication.

The increasing use of LLMs for health-related purposes raises concerns for patient safety, WHO said. The precipitous adoption of untested systems could lead to errors by healthcare workers and cause harm to patients.
