In this learning activity, we will examine the medical ethics principles and key considerations practitioners must keep in mind as they adapt to a healthcare system that is increasingly powered by AI.
 

 

Overview




A New Era in Medicine

To start our discussion, we'd like you to take a moment to visualise this scenario:

Dr. X, an oncologist working in a busy tertiary hospital, sat at his desk, reviewing the latest medical images sent to him by his colleague, who was asking for his advice. 

The patient, a 52-year-old woman with a long-standing history of diabetes, had recently experienced significant weight loss and persistent fatigue that could not be attributed to diabetes. Despite conducting a battery of tests and implementing a range of therapeutic interventions, the referring doctor was unable to pin down a diagnosis, and the patient continued to deteriorate. He asked Dr. X for his help.


The oncology clinic was unusually busy that day, and he was already running significantly behind on his consults. To help manage his workload, Dr. X had recently implemented an AI-powered diagnostic tool, which he kept running in the background of his laptop. The tool was designed to review and track patient histories, including scans, blood tests and various other records, and to flag potential concerns. As he examined the scans sent to him by his colleague, the tool issued an alert. Accessing the area highlighted by the AI, he reviewed the attached AI-generated note. The system had detected subtle anomalies: minute changes in tissue density and microvascular patterns. Considered alongside the patient's history and her recent blood work, these findings were flagged by the AI as suggestive of the early signs of a pancreatic neuroendocrine tumour (PNET), a rare and often elusive cancer.

This would not have been his first diagnostic consideration. Yet, the algorithm’s ability to detect fine, early-stage changes, together with the patient's history, made a compelling case for further investigation. 

 
Dr. X instructed the AI tool to document the findings and forward them, together with the annotated scan, to his colleague for discussion. He then asked his AI desktop assistant to schedule a follow-up series of scans for the patient with the radiology department, ensuring the necessary notes and record-keeping were completed.

Having just finished this task, he received a silent alert on his phone from the hospital's critical care unit. The unit's AI monitoring system had detected early signs of deterioration in one of his post-operative patients in the ICU. The algorithm, programmed to analyse a series of parameters, had predicted that the patient would experience a rapid decline in respiratory function within the next six hours.

A pre-emptive intervention plan was already in motion. The AI suggested an adjustment in ventilation settings and flagged the patient for priority monitoring. Dr. X acknowledged the alert, trusting the ICU team to act on the AI’s recommendations in line with their clinical expertise.

 
As Dr. X attended to the patients in his clinic, a machine-learning algorithm in the medical intake unit analysed genomic data to identify individuals predisposed to conditions that might affect their treatment outcomes. The system flagged one patient as being at increased risk of adverse reactions to chemotherapy, notifying the shift leader and recommending an alternative approach tailored to the patient's unique profile.
 
Meanwhile, in the hospital’s administrative wing, AI optimised resource allocation. Bed occupancy levels, surgical slots, and specialist availability were managed with predictive algorithms, ensuring that critical cases were prioritised while avoiding bottlenecks. 

What once took hours of manual coordination now happens in seconds.
 
These are not distant, futuristic scenarios. They are real-world examples of how AI is being integrated into healthcare settings across the globe, chosen to illustrate AI's potential: faster and more accurate diagnoses, tailored treatment plans, and improved patient outcomes. From early disease detection to precision medicine, predictive analytics, and operational efficiency, the integration of AI into everyday practice will fundamentally change the way we care for patients.

But while AI offers remarkable benefits, its true potential lies in augmenting, not replacing, clinical expertise. For AI to truly enhance healthcare, it must be integrated responsibly. The very qualities that make AI so powerful—its data-driven insights, predictive capabilities, and automation—also raise ethical and practical concerns.

How do we ensure that AI-driven decisions remain transparent? How do we prevent biases in AI models from reinforcing healthcare inequalities? And how do we strike the right balance between clinical judgment and machine-generated recommendations?

As we embrace AI’s extraordinary promise, healthcare professionals must navigate these challenges with care and consideration, ensuring that AI serves as a trusted ally rather than a source of unintended harm.

In the first article of this series, The Use of WhatsApp in Clinical Practice, we explored the regulations and guidelines governing digital technologies in healthcare. Our second article focused on the practical application of these technologies, specifically examining the practicalities of deploying ChatGPT in your practice.

In this article, we will examine the medical ethics principles and key considerations practitioners should keep in mind as they adapt to a healthcare system that is increasingly powered by AI.


The Ethical Crossroads

Healthcare stands at a pivotal crossroads as AI becomes increasingly woven into daily practice. The promise of innovation is vast, but so are the ethical challenges. To navigate this shift without compromising patient care, we must anchor ourselves in the fundamental principles that have long guided medicine: beneficence, non-maleficence, autonomy, and justice. These ethical cornerstones must remain our compass—ensuring that AI enhances, rather than undermines, the core values of medicine.

Let's take a moment to reflect on how these principles will both influence and be shaped by AI.

 

Beneficence and Non-Maleficence


The principle of beneficence requires that healthcare professionals act in the best interests of the patient, promoting their well-being and striving to provide the highest standard of care. 

Non-maleficence, in turn, demands that healthcare professionals do not harm their patients, avoiding actions that might cause unnecessary suffering.

In the context of AI, these two principles converge around the concept of patient safety. AI models are trained to identify patterns and make predictions based on vast datasets, but these algorithms are not perfect. A model might suggest a treatment based on historical data that worked for the majority of patients, but what if this data doesn’t fully account for an individual patient’s unique medical history, genetic makeup, or coexisting conditions?

For example, consider an AI tool that suggests a particular course of treatment for papillary thyroid carcinoma based on a large dataset of information obtained from various sources. If the dataset excludes patients from certain ethnic backgrounds or socioeconomic classes, the model may not fully capture the nuances of how the disease behaves in a broader, more diverse population. This could lead to treatment suggestions that are suboptimal or even harmful to some individuals. 

Healthcare professionals should therefore validate AI-driven recommendations against evidence-based practice, ensuring that the insights offered align with current clinical standards and are personalised to each patient's specific needs. Used this way, AI enhances patient care rather than inadvertently introducing risk.


Autonomy

Patient autonomy—the right to make informed decisions about one’s healthcare—is a fundamental principle of medical ethics. 

However, in an era where artificial intelligence (AI) plays an increasing role in medical decision-making, ensuring that autonomy is upheld becomes more complex. How can practitioners support informed patient choice when the accuracy and transparency of AI-generated recommendations remain uncertain?

AI models, particularly those based on deep learning, can operate as "black boxes," making decisions based on complex algorithms that even the creators of the model may not fully understand.


Take, for example, the papillary thyroid carcinoma scenario described earlier, and imagine the patient researching treatment options using ChatGPT. The AI suggests a regimen that differs from the practitioner's recommendation. Given that large language models frequently misattribute or fabricate sources, the patient may not fully understand the basis of the AI's suggestion. Without transparency, they cannot assess the reliability of the recommendation or the nature of the data informing it. Yet they are inclined to follow the AI's suggested course of action. In such cases, how can a healthcare provider effectively guide the patient while respecting their autonomy?

This challenge underscores a critical issue: trust in medical decision-making depends on clarity. Patients need to understand not just their options but also how those options were generated. AI models, however, remain largely opaque, with decision-making processes that are difficult, if not impossible, to interpret. Until AI becomes more transparent, the burden falls on healthcare professionals to bridge this gap.



Justice
 

Justice in healthcare means fair and equitable access to resources and treatment, ensuring that care is not influenced by race, gender, or socioeconomic status.

The integration of AI into medical practice is not just about the technology itself; it is about how we use it to enhance human judgment while safeguarding this principle.

In theory, AI has the potential to advance healthcare equity—but in practice, it may also deepen existing disparities.

Many AI models are trained on datasets that primarily reflect urban populations in high-income countries. As a result, these systems may not adequately account for the needs of rural communities, lower-income groups, or minority populations. This lack of representation can lead to biased algorithms that reinforce healthcare inequalities rather than reduce them.

For example, an AI tool designed to predict disease risk might be trained on data from a specific region, ethnic group, or socioeconomic background. When applied to a more diverse population, it may fail to account for differences in genetic factors, environmental exposures, or healthcare access—leading to inaccurate diagnoses or inappropriate treatment recommendations.

True justice in AI means preventing these biases from widening the healthcare gap. To achieve this, AI models must be trained on diverse datasets that reflect the full spectrum of human populations. Only then can AI serve as a tool for equitable healthcare, particularly in regions with limited access to specialists and advanced medical technologies.

The challenge isn’t just about developing AI—it’s about using it responsibly. If AI is to enhance medical decision-making, it must do so in a way that upholds the core values of medicine: fairness, accuracy, and patient-centred care.


The Data Dilemma

Data is the lifeblood of AI. It fuels algorithms, drives predictions, and allows machines to learn patterns that can improve patient outcomes. However, the reliance on patient data introduces significant ethical challenges. Electronic health records (EHRs), genomic sequences, and imaging studies provide invaluable insights, but they also raise serious concerns about privacy, security, and consent. These concerns become even more pronounced when we consider the use of AI in healthcare, where data is not just used to inform decisions but actively drives them.

Let’s take a moment to understand what this could mean in a practical sense. 


A Case Study: The AI Model with Unintended Consequences

To improve patient outcomes at its diabetic outpatient clinic, a large metropolitan hospital deployed an AI model to predict which patients were most at risk of complications.

The AI system leveraged data from the hospital's electronic health records (EHRs), spanning some twenty years of laboratory results and medical histories. The goal was to identify high-risk patients early, enabling clinicians to intervene promptly and allocate resources efficiently.

Several months after the system's implementation, clinicians began noticing discrepancies in the model's predictions. Some patients flagged as low-risk experienced severe complications, while others categorised as high-risk remained stable. Eventually, the discrepancies became so noticeable, and the fallout from the system's blanket adoption so concerning, that an audit was commissioned.

Upon closer examination, the hospital’s data science team discovered that the AI model had incorporated biases from the historical patient records.

Certain groups of patients were systematically assigned lower risk scores despite having similar clinical profiles to those flagged as high-risk.

This stemmed from the AI's reliance on historical data, which reflected systemic disparities in diabetes management and outcomes. Historically, marginalised communities had faced reduced access to hospital care, later-stage diagnoses, and under-treatment of complications, leading to skewed data that falsely suggested they were at lower risk of deterioration.

As a result of these misclassifications, several patients did not receive timely interventions, leading to increased emergency department visits, higher rates of diabetic ketoacidosis, and prolonged hospital stays.
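To make the audit concrete, the short sketch below illustrates the kind of check a data science team might run in a situation like this. It is a minimal, hypothetical example in Python using pandas: the column names (risk_score, hba1c, group) and the synthetic data are assumptions made purely for illustration, not the hospital's actual system. The idea is simply to compare the model's predicted risk across demographic groups after stratifying patients into comparable bands of clinical severity.

```python
import numpy as np
import pandas as pd

# Hypothetical audit data: one row per patient with a demographic group label
# and two routine clinical markers. In a real audit these would come from the
# hospital's EHR together with the model's logged predictions.
rng = np.random.default_rng(0)
n = 1_000
audit = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "hba1c": rng.normal(8.5, 1.5, size=n),   # glycaemic control (%)
    "egfr": rng.normal(70, 20, size=n),      # renal function (mL/min/1.73m2)
})

# Synthetic "model output" that deliberately embeds a bias against group B,
# so the audit has something to find: clinically similar patients in group B
# receive systematically lower predicted risk.
clinical_risk = (audit["hba1c"] - 6) / 6 + (90 - audit["egfr"]) / 200
audit["risk_score"] = (clinical_risk - 0.15 * (audit["group"] == "B")).clip(0, 1)

# Stratify patients into bands of comparable glycaemic control, then compare
# the mean predicted risk for each demographic group within each band.
audit["hba1c_band"] = pd.cut(audit["hba1c"], bins=[0, 7, 9, 20],
                             labels=["mild", "moderate", "severe"])
by_band = (audit.groupby(["hba1c_band", "group"], observed=True)["risk_score"]
                .mean().unstack())
by_band["gap"] = by_band["A"] - by_band["B"]

# A persistent gap in the same direction across bands is a red flag that the
# model has absorbed historical disparities rather than true clinical risk.
print(by_band.round(3))
```

On real data, a persistent gap in the same direction across severity bands would be grounds for retraining the model on more representative records, or recalibrating it per group, with clinical oversight of any decisions made in the interim.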


Broader Implications: The Dangers of Narrow Data

This case study highlights one of the key issues in AI development: data bias. Currently, the majority of AI systems are trained on data from high-income Western countries, populations that often have better access to healthcare and more robust medical records.

When these AI tools are applied to populations whose demographics and health profiles differ significantly from the data on which the models were trained, the real-world impacts become apparent.

For instance, a model designed to predict the risk of cardiovascular disease in a population using predominantly European genetic data might miss important genetic or environmental factors that are more relevant to populations in Africa, Asia, or South America. It could, therefore, fail to account for the unique challenges faced by populations in Sub-Saharan Africa, where factors like malnutrition, infectious diseases, and differing genetic predispositions play a far greater role than in industrialised nations. Inaccuracies in predictions can lead to misdiagnosis, delayed treatment, and, ultimately, a widening gap in health outcomes between different populations.

The case above further underscores the growing need for ethical AI deployment.  AI should not be seen as a “quick fix” for the healthcare system’s deep-rooted issues. While these tools can be powerful, they are not infallible, and human oversight is necessary to identify and address errors or biases in AI-driven recommendations. Regular auditing of AI tools for fairness, accuracy, and accountability is crucial to ensure they are working as intended and not perpetuating harm.
 

The ultimate goal should be to ensure that AI serves everyone equally, irrespective of race, geography, or socioeconomic status.

 
 
The Trust Equation


In healthcare, where the stakes are often life and death, trust is paramount.  

For AI to truly serve as a trusted partner in medical decision-making, it must operate with transparency. However, one of the most significant challenges AI faces in healthcare today is, as mentioned previously, its tendency to operate in a "black box." The complexity of algorithms and the sheer volume of data they analyse can make it difficult, if not impossible, for doctors and patients to understand exactly how an AI arrived at its decision.
 


Consider a future scenario where AI recommends a chemotherapy regimen for a cancer patient. The system suggests a specific combination of drugs and dosages based on a wealth of patient data, including genetic information, prior treatment history, and clinical markers. However, the clinician is unable to explain how the AI came to this recommendation.

There might be thousands of variables influencing the AI’s decision. Still, without a clear understanding of the reasoning process, the oncologist faces a difficult choice: should they trust the AI and follow its recommendation, or should they rely on their own expertise and clinical judgment?

This lack of transparency poses a significant issue. Doctors are trained to understand the nuances of patient care and make informed decisions based on their clinical knowledge and experience. When AI enters the picture, trust in the technology becomes critical. If clinicians are unable to verify or understand how an AI arrived at its conclusions, their confidence in the system can be severely diminished. This not only undermines the utility of AI in medical practice but also places clinicians in a challenging ethical position. Should they take the risk of following an AI-generated recommendation without fully understanding it?

For patients, the issue of trust is even more personal. Most patients, especially those undergoing life-altering treatments like chemotherapy, want to feel assured that the treatment plan is tailored specifically for them. If the decision-making process is hidden behind a "black box," patients might question the fairness, accuracy, or even the potential biases that may have influenced the AI’s recommendation. 

Explainability: Making AI Understandable

In response to these concerns, there is a growing focus on building 'explainability' into AI tools, with techniques designed to make these systems more understandable and accountable. One such approach is model interpretability, which refers to the ability to interpret and explain the behaviour of a machine learning model, allowing healthcare professionals to better understand the "why" behind an AI's recommendation.

Transparency in AI is not just a matter of understanding how decisions are made—it is about creating an environment where patients, doctors, and healthcare systems can confidently rely on AI to improve health outcomes.


For instance, instead of simply presenting a list of recommended treatments, an explainable AI model could outline the key factors that influenced its decision, such as the patient's genetic makeup, tumour characteristics, or response to previous treatments. With insight into the variables and data points that led to a recommendation, healthcare professionals can make more informed decisions and ensure that AI is used as a support tool rather than a "black-box" decision-maker.
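As an illustration of the underlying idea, the sketch below uses a deliberately simple, inherently interpretable model (logistic regression) whose per-patient contributions can be read directly from its coefficients. The feature names and data are hypothetical, and real clinical decision-support tools typically rely on more sophisticated interpretability methods (SHAP values, for example); this is a minimal sketch of the principle, not a description of any particular product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: rows are patients, columns are clinical features.
# Names and values are invented purely for illustration.
feature_names = ["tumour_size_mm", "ki67_index", "prior_response", "age"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

# Standardise features so the coefficients are comparable in magnitude.
scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# "Explain" a single recommendation: for a linear model, each feature's
# contribution to the log-odds is simply coefficient * standardised value.
patient = rng.normal(size=(1, len(feature_names)))
contributions = model.coef_[0] * scaler.transform(patient)[0]

for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name:>16}: {value:+.2f}")
```

Presented this way, a clinician can see at a glance which factors pushed the recommendation up or down and weigh them against their own assessment of the patient.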

Another development gaining momentum is the inclusion of fairness-aware algorithms in AI tools. These algorithms aim to minimise biases in AI models by ensuring that predictions are not unduly influenced by factors such as race, gender, or socioeconomic status. If an AI system can explain how it arrived at a recommendation while also demonstrating fairness in its decision-making process, clinicians and patients can be more confident in its accuracy and reliability.
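To give a sense of what 'demonstrating fairness' can mean operationally, the sketch below computes two common group-level checks with plain pandas: the rate at which each group is flagged by the model, and the model's sensitivity (true-positive rate) within each group. The column names and data are assumptions made for illustration; dedicated libraries such as Fairlearn provide more complete metrics, but the underlying comparison is the same.

```python
import pandas as pd

# Hypothetical evaluation set: the model's flag, the patient's observed
# outcome, and a protected attribute recorded only for auditing purposes.
results = pd.DataFrame({
    "flagged": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1],
    "outcome": [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1],
    "group":   list("AABBAABBAABB"),
})

def group_metrics(df: pd.DataFrame) -> pd.Series:
    positives = df[df["outcome"] == 1]
    return pd.Series({
        "flag_rate": df["flagged"].mean(),           # demographic-parity check
        "sensitivity": positives["flagged"].mean(),  # equal-opportunity check
    })

# Compare the two metrics across groups; large gaps on either one suggest the
# model's errors are unevenly distributed and warrant investigation before
# (or during) clinical deployment.
per_group = results.groupby("group")[["flagged", "outcome"]].apply(group_metrics)
print(per_group.round(2))
```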

Finally, providing clear information about the data sources used by AI systems is key to the development of trust. Patients and healthcare providers need to know where the data comes from, how it was collected, and whether it represents a diverse and representative sample of the population. 

This shift will require ongoing research and development in AI interpretability, the ethics of AI deployment, and patient-centred design. Only by making AI decisions clear, explainable, and fair can we ensure that the technology lives up to its potential in transforming healthcare.
 

Regulation and Responsibility

Unlike a drug or medical device, AI is not a static entity—it evolves continuously as it learns from new data. In the South African context, where access to healthcare can be highly variable across urban and rural areas, AI’s dynamic nature presents unique challenges for regulators. 

AI systems are not built with a fixed set of parameters; they adapt and refine their algorithms as they interact with more data. This raises critical questions: How do we ensure that an AI model remains safe and effective as it evolves? How do we keep up with rapid changes while also protecting patient safety, particularly in under-resourced healthcare settings?

In addition to these concerns, the question of accountability is complex.

If an AI system makes an error, who is responsible? Is it the healthcare provider, the developer who created the algorithm, or the healthcare institution that deployed it? In South Africa, where the healthcare system is still grappling with issues such as resource constraints, unequal access to technology, and high disease burden, these questions are even more pressing. Governments and regulatory bodies are actively working to create frameworks for overseeing AI in healthcare, but healthcare professionals bear the ultimate responsibility for patient care.
 


The Road Ahead



Meanwhile, back in his office, our oncologist, Dr. X, closes his laptop. The hum of the day has faded, leaving only the quiet stillness of his thoughts. Leaning back in his chair, he reflects on how AI has woven itself into nearly every patient interaction. Yet, its true power lies not in replacing human expertise but in augmenting it.  

For AI to be a trusted partner in healthcare, transparency is non-negotiable. Clinicians must feel empowered to question its recommendations, integrating their expertise with the insights AI provides. This evolving relationship between AI and medicine holds great promise—but only if developed and deployed with ethical integrity.  

AI must be inclusive, transparent, and accountable. It must be trained on diverse datasets to ensure fairness, with safeguards in place to prevent bias. Ethical principles—beneficence, non-maleficence, autonomy, and justice—must guide its integration, ensuring that AI complements rather than overrides human judgment.  

As Dr. X considers his role in this shifting landscape, he knows that the future of AI in healthcare depends on collaboration. Healthcare professionals, policymakers, and technologists must work together to refine algorithms, train clinicians in AI literacy, and uphold transparency.  

By embracing these principles, AI can become a powerful tool for improving patient outcomes—especially in underserved regions, where equitable access to quality care is needed most.

 



Bibliography


1. Botha, P.J., Schoonees, A., & Pontes, C.C. (2018). Mapping oral cancer research in South Africa. South African Dental Journal, 73(6), 384-394. https://doi.org/10.17159/2519-0105/2018/v73no6a1

2. Entwistle VA, Carter SM, Cribb A, McCaffery K. Supporting patient autonomy: the importance of clinician-patient relationships. J Gen Intern Med. 2010 Jul;25(7):741-5. doi: 10.1007/s11606-010-1292-2. Epub 2010 Mar 6. PMID: 20213206; PMCID: PMC2881979.

3. Fletcher RR, Nakeshimana A, Olubeko O. Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health. Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. PMID: 33981989; PMCID: PMC8107824.

4. Harishbhai Tilala M, Kumar Chenchala P, Choppadandi A, Kaur J, Naguri S, Saoji R, Devaguptapu B. Ethical Considerations in the Use of Artificial Intelligence and Machine Learning in Health Care: A Comprehensive Review. Cureus. 2024 Jun 15;16(6):e62443. doi: 10.7759/cureus.62443. PMID: 39011215; PMCID: PMC11249277.

5. Kinsinger, F.S. (2009). Beneficence and the professional's moral imperative. Journal of Chiropractic Humanities, 16(1), 44–46. https://doi.org/10.1016/j.echu.2010.02.006

6. Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3

7. Ro C, Chai W, Yu VE, Yu R. Pancreatic neuroendocrine tumors: biology, diagnosis, and treatment. Chin J Cancer. 2013 Jun;32(6):312-24. doi: 10.5732/cjc.012.10295. Epub 2012 Dec 14. PMID: 23237225; PMCID: PMC3845620.


8. Wójcik, M.A. (2022). Algorithmic discrimination in health care: An EU law perspective. Health and Human Rights, 24(1), 93–103.

9. TechTarget. (n.d.). Black box AI. Retrieved March 17, 2025, from https://www.techtarget.com/whatis/definition/black-box-AI

 


Publication Information

Published: 15 March 2025
Catalogue Number: MAICPD003
Category: Healthcare Law & Ethics
Sub-Category: Artificial Intelligence
Fact-Checked: 13 March 2025

 



Disclaimer
Every effort has been made to attribute quotes and content correctly. Where possible, all information has been independently verified. The Medical Education Network bears no responsibility for any inaccuracies that may occur from the use of third-party sources. If you have any queries regarding this article, please contact us.

Fact-checking Policy

The Medical Education Network makes every effort to review and fact-check the articles used as source material in our summaries and original material. We have strict guidelines in relation to the publications we use as our source data, favouring peer-reviewed research wherever possible. Every effort is made to ensure that the information contained here accurately reflects the original material. Should you find inaccuracies or out-of-date content or have any additional issues with our articles, please make use of the Contact Us form to notify us.

 
 
 

The Medical Education Network