WHO issues guidance on ethical use of generative AI in healthcare

In a move toward ethical governance of advancing generative artificial intelligence (AI) technologies in healthcare, the World Health Organization (WHO) has published comprehensive guidance on large multimodal models (LMMs). These models, which can accept a variety of data inputs such as text, video, and images, saw unprecedented adoption in 2023 as platforms such as ChatGPT, Bard, and Bert gained public attention.

WHO’s guidance, which consists of more than 40 recommendations, is addressed to governments, technology companies, and healthcare providers, and aims to ensure the responsible use of LMMs to promote and protect population health. Dr. Jeremy Farrar, WHO Chief Scientist, highlighted the potential benefits of generative AI technologies in healthcare, while emphasizing the need for transparent information and policies to manage the associated risks.

Known for their ability to mimic human communication and perform tasks they were not explicitly programmed to do, LMMs have five broad potential applications in healthcare, as outlined by WHO: diagnosis and clinical care; patient-guided use, such as investigating symptoms and treatments; clerical and administrative tasks within electronic health records; medical and nursing education, including simulated patient encounters; and scientific research and drug development, including the identification of new compounds.

However, the guidance highlights the documented risks associated with LMMs, including the creation of false, inaccurate or biased information. This could potentially cause harm to individuals who rely on such information to make important health decisions. The quality and bias of training data related to factors such as race, ethnicity, ancestry, gender, gender identity, or age can compromise the integrity of LMM output.

Beyond individual risks, WHO acknowledges the broader challenges to health systems that stem from LMMs. These include concerns about the accessibility and affordability of the most advanced LMMs, potential ‘automation bias’ among healthcare professionals and patients, cybersecurity vulnerabilities that jeopardize patient information, and the reliability of AI algorithms in healthcare delivery.

Stakeholder participation required for LMM deployment

To address these issues, WHO emphasizes the need for the involvement of multiple stakeholders throughout LMM development and deployment. Governments, technology companies, healthcare providers, patients and civil society must actively engage in ensuring the responsible use of AI technologies.

This guidance provides specific recommendations to governments and places primary responsibility on governments to set standards for developing, distributing and integrating LMMs into public health and medical practice.

WHO urges governments to invest in or provide not-for-profit or public infrastructure, including computing power and public data sets, that can be accessed by developers across a variety of sectors. Access to these resources would be conditional on users adhering to ethical principles and values. Laws, policies, and regulations must be adopted to ensure that LMMs in healthcare meet ethical obligations and human rights standards, protecting aspects such as dignity, autonomy, and privacy.

The guidance also suggests assigning existing or new regulatory agencies to evaluate and approve medical LMMs and applications within the constraints of available resources. Additionally, mandatory post-release audits and impact assessments by an independent third party are recommended for LMMs deployed at scale. These assessments should include data protection and human rights considerations, and outcomes and impacts should be disaggregated by user characteristics such as age, race, and disability.

Key responsibilities are also delegated to LMM developers. They must ensure that all direct and indirect stakeholders, including potential users and healthcare providers, scientific researchers, medical professionals and patients, are involved from the earliest stages of AI development. A transparent, inclusive, and structured design process should allow stakeholders to raise ethical issues, express concerns, and provide input.

Additionally, LMMs must be designed to perform well-defined tasks with the accuracy and reliability necessary to strengthen healthcare systems and improve patient care. Developers must also have the ability to predict and understand potential second-order consequences of AI applications.

Disclaimer

In accordance with the Trust Project Guidelines, the information provided on these pages is not intended and should not be construed as legal, tax, investment, financial or any other form of advice. It is important to invest only what you can afford to lose and, when in doubt, seek independent financial advice. We recommend that you refer to the Terms of Use and help and support pages provided by the publisher or advertiser for more information. Although MetaversePost is committed to accurate and unbiased reporting, market conditions may change without notice.

About the author

Kumar is an experienced technology journalist specializing in the dynamic intersection of emerging fields including AI/ML, marketing technology, cryptocurrency, blockchain, and NFTs. With over three years of experience in the industry, Kumar has established a proven track record in crafting compelling narratives, conducting insightful interviews, and providing comprehensive insights. Kumar’s expertise lies in producing impactful content, including articles, reports and research publications for prominent industry platforms. With a unique skill at combining technical knowledge and storytelling, Kumar excels at communicating complex technical concepts in a clear and engaging way to diverse audiences.
