WHO urges caution with healthcare AI deployments
As artificial intelligence deployments increase in pace and scope in healthcare organizations around the globe, the World Health Organization this week issued a plea for vigilance and deliberation when it comes to how AI and machine learning models are put to use.
WHY IT MATTERS
The WHO called for “caution to be exercised” in how AI is used in clinical and other healthcare settings – particularly the fast-evolving large language model tools such as ChatGPT.
To “protect and promote human well-being, human safety and autonomy” – and to preserve public health – officials said it’s “imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity.”
WHO acknowledged that the recent “meteoric public diffusion and growing experimental use” of tools such as ChatGPT, Bard, BERT and others is “generating significant excitement around the potential to support people’s health needs.”
While experts at the UN body said they’re enthusiastic about the “appropriate use” of those leading-edge algorithms, they’re also concerned that “caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.”
WHO officials worry that “precipitous adoption of untested systems” could not only cause harm to patients through medical errors and inaccurate information, but also “erode trust in AI and thereby undermine (or delay) the potential long-term benefits” of its use.
Specifically, the statement cited concerns that the values of “transparency, inclusion, public engagement, expert supervision, and rigorous evaluation” may not be upheld.
WHO wants those imperatives to be top-of-mind as AI is deployed, and called for “clear evidence of benefit” to be measured before widespread and routine use of LLMs and other AI models in healthcare delivery.
ON THE RECORD
“WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health,” officials said.
“The six core principles identified by WHO are: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; (6) promote AI that is responsive and sustainable.”
Mike Miliard is executive editor of Healthcare IT News
Email the writer: [email protected]
Healthcare IT News is a HIMSS publication.