Countries are still largely unprepared for risks and opportunities presented by generative AI.  / Photo: AFP

By Sylvia Chebet

In the fast-moving, futuristic world of artificial intelligence, one of the most widely used and rapidly evolving classes of models is becoming a game-changer for healthcare.

Large multi-modal models (LMMs) are known for their ability to analyse and integrate diverse data inputs — images, text and videos — and provide multiple, generative outputs that potentially have a range of uses in medical treatment.

These models are unique in their mimicry of human communication and ability to carry out tasks they were not explicitly programmed to perform.

But as with all good things that have the potential to be great, regulation is critical.

The World Health Organisation (WHO) has just come out with a set of guidelines covering the ethics and governance of these LMMs.

The guidance outlines over 40 recommendations for consideration by governments, technology companies, and healthcare providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

Dr Ishmael Mekaeel Maknoon, a general medical practitioner in Kenya's Kwale County, believes AI has immense potential in healthcare, especially in Africa.

"We have a huge shortfall of doctors and nurses, but we can leverage technology," he tells TRT Afrika.

"Technology can help us utilise our limited resources at scale, not just in urban sectors, but also rural areas."

Improved internet connectivity across the continent is another advantage. In rural Kwale, where Dr Maknoon works, many patients possess smartphones.

He is working with engineers and doctors to develop an AI tool to offer personalised medical services to patients with chronic conditions such as hypertension, diabetes and kidney disease.

Dr Maknoon says governments should capitalise on the advances made in AI, arguing that it is affordable to develop or deploy, contrary to popular belief.

Risk analysis

WHO notes that LMMs have been adopted faster than any consumer application in history, with several AI platforms like ChatGPT, Bard and BERT entering the public consciousness in 2023.

"Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks," says Dr Jeremy Farrar, WHO's chief scientist.

"We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities."

The new WHO guidance outlines five broad applications of LMMs for healthcare. These include diagnosis and clinical care, patient-guided use for investigating symptoms and treatment, and clerical and administrative tasks such as documenting and summarising patient visits.

It also covers the application of LMMs in medical and nursing education, such as providing trainees with simulated patient encounters, as well as in scientific research and drug development.

Pros and cons

While LMMs are starting to be used for specific health-related purposes, there are concerns that they could produce false, inaccurate, biased or incomplete outputs, which could harm people who rely on such information to make health decisions.

New WHO guidance aims to manage the design, development, and use of generative AI. Photo: TRT Balkans

The WHO guidance also details broader risks to health systems, such as accessibility and affordability of the best-performing LMMs.

LMMs can also encourage "automation bias" among healthcare professionals and patients, whereby errors that would otherwise have been caught are overlooked, or difficult choices are improperly delegated to an LMM.

But Dr Maknoon believes that with attentiveness to detail, AI could help reduce some of the human errors in healthcare.

"If I fill in the right data, then definitely it is going to be able to give me what I want, the right form of algorithms," he explains to TRT Afrika.

LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information or the trustworthiness of the algorithms used in healthcare.

To create safe and effective LMMs, WHO underlines the need to engage various stakeholders – governments, technology companies, healthcare providers, patients, and civil society – in all stages of development and deployment of such technologies, including their oversight and regulation.

"Governments of all countries must cooperatively lead efforts to regulate the development and use of AI technologies effectively," says Dr Alain Labrique, WHO's director for digital health and innovation in the science division.

Recommendations

WHO's guidance is meant to assist governments in mapping the benefits and challenges associated with using LMMs for health, besides framing policies and practices for appropriate development, provision and use.

Under the new guidance, governments must provide public infrastructure, including computing power and public data sets accessible to developers in the public and private sectors who adhere to stipulated ethical principles and values.

It also recommends that governments develop laws, policies and regulations to ensure that LMMs and applications used in healthcare meet ethical obligations and human rights standards to protect people’s dignity, autonomy and privacy.

Potential users and all direct and indirect stakeholders, including medical providers, scientific researchers, healthcare professionals and patients, should be engaged from the early stages of AI development.

Developers are expected to give stakeholders opportunities to raise ethical issues, voice concerns, and provide inputs for the AI application under consideration.

Beyond ensuring accuracy and reliability, the guidance requires developers to be able to predict and understand potential secondary outcomes of their applications.

Dr Maknoon believes that with proper regulation, adequate investments and patient-centred approaches, AI has the potential to revolutionise healthcare.

TRT Afrika