Opinion: How Hong Kong can lead in patient-focused ethical AI healthcare | SCMP

Hong Kong is uniquely positioned to lead the world in AI-based advances in healthcare. This was the view expressed by experts at the recent annual Asia Summit on Global Health. Firms like Bain and Company similarly see diverse patient populations and strong government support as key advantages in the Asia-Pacific’s development as a hub for medical technology innovation. This raises big hopes and important ethical questions about AI’s role in the future of healthcare.

As Hong Kong strives to lead innovation in the field, will it ensure AI application is ethical and genuinely serves the interests of patients?

The evolving ethics of AI in biomedicine was the topic of a recent seminar we spoke at, hosted by the Medical Ethics and Humanities Unit at the University of Hong Kong’s Li Ka Shing Faculty of Medicine. The forum discussed key ethical frameworks and encouraged thoughtful public conversation on the use of AI in healthcare.

Ethical reflection, discussion with stakeholders and carefully crafted guidelines are all vital to ensuring this technology is used to improve human lives. If implemented well, AI has tremendous potential for good. It may enhance patient experience, improve population health, reduce costs and increase physician well-being by reducing fatigue and making the work more meaningful.

With medical imaging, for example, AI's capability in analysing images exceeds that of the human eye. AI algorithms based on deep learning models can detect anomalies that are easy for a human to miss. AI can therefore enhance doctors' ability to determine the results of tests for rectal cancer, lung disease and other illnesses. This elevates the standard of care, a clear win for patients.

Similarly, where AI is used to enhance robotic-assisted surgeries, this means greater precision. If the ethics of data storage can be addressed, such applications should be relatively uncontroversial.

More controversial are cases where AI directly alters the doctor-patient relationship. If patients are guided through the decision-making process before surgery, for example, by ChatGPT or a similar large language model (LLM), this could bring both positive and negative consequences.

It would provide patients with more information to access at their own pace. It could reduce bias that emerges where advice is unduly influenced by a doctor’s personal commitments. It could save time and money for healthcare providers struggling under pressure from limited resources. Yet it would diminish or remove an important point of contact between doctors and patients before a major medical and personal event.

Face-to-face conversation is an opportunity for physicians to build trust, enact responsibility and care for patients as whole persons. Conscientious healthcare professionals can do a lot of good that an AI cannot. They can help a patient find their voice amid pressures from the environment and people around them. They can tailor advice to a patient’s needs as they unfold in real time. They can empathise because they share the experience of being human, having suffered and being able to imagine a life with disability or loss.

Conversely, interaction with patients is an opportunity for physicians to understand the gravity of their work and develop a range of attitudes and emotions fitting with it.

As innovation moves forward, it is important to remember medicine is a humanistic endeavour. Along with respect for dignity and choice, it is a goal of the medical profession to care for patient well-being in the broader sense. It’s written into the Declaration of Geneva, which first-year medical students at HKU will pledge to uphold in September.

The achievement of this goal is not inevitable. It takes commitment from medical professionals, educators and institutions, and it is in the public interest that this commitment be made. This is a conversation everyone should be part of. We need to build consensus around the idea that AI cannot be allowed to diminish what we value most about being human.

What might this look like in practice? John Tasioulas, director of the Institute for Ethics in AI at Oxford University, argues that people have the right to a human decision in things like court cases, which could soon plausibly be adjudicated by AI. In medicine, this might look like the right to a human discussion.

It might mean, first, providing access to specialised LLMs that ensure the accuracy of clinical information and advice. These could be engaged in a hospital or at home over an extended period before a decision. This could empower patients to be maximally informed before a human discussion. Rather than displacing that discussion, it could make it richer and more supportive.

Optimists like Eric Topol, author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, see AI freeing up more time for human connection. Making a difference in the lives of patients is a key motivator for many who pursue a medical career. It can be hard to appreciate the difference one makes when one is overburdened by electronic health records and excessive administrative tasks. If AI can help to carry this load, it can enable doctors to spend more time doing the sort of work they entered the profession to do. This is not inevitable – it is a future we must work towards.

At one time or another, we will all be patients. We have a stake in seeing AI-based innovations serve genuinely human interests. In this rapidly evolving landscape, Hong Kong has an opportunity to become a leader in AI’s ethical implementation and governance. This can have a lasting impact on the region and the broader human community for years to come.


Prof Carl Hildebrand

Assistant Professor, Medical Ethics and Humanities Unit, School of Clinical Medicine, LKS Faculty of Medicine
Research Fellow, Centre for Medical Ethics and Law
The University of Hong Kong

Prof Rebecca Brendel
Director, Harvard Medical School Center for Bioethics
Francis Glessner Lee Associate Professor of Global Health and Social Medicine in the Field of Legal Medicine
Associate Professor of Psychiatry

Source: https://www.scmp.com/opinion/hong-kong-opinion/article/3319328/how-hong-kong-can-lead-patient-focused-ethical-ai-healthcare?module=perpetual_scroll_0&pgtype=article