In this learning activity, we will unpack the role of ChatGPT in an African healthcare context. You'll explore its potential benefits, understand its limitations, and gain practical insights into how to integrate AI-driven tools into your clinical practice while ensuring you are compliant with your ethical and legal requirements.
The rise of AI has been nothing short of revolutionary, seamlessly integrating into nearly every aspect of daily life—from how you use your phone and conduct your work to the operation of home security systems.
The recent announcement of a $500-billion investment in Artificial Intelligence (AI) infrastructure by the US government signals that we can expect further exponential growth and influence of AI technologies in the coming years.
Given this backdrop, it's clear that AI will continue to play a crucial role in shaping the future of healthcare. Indeed, AI is now embedded in nearly every aspect of modern medicine.
A 2024 survey published in BMJ Health and Care Informatics highlighted that the use of artificial intelligence (AI) in healthcare is expanding at an unprecedented rate, with practitioners increasingly integrating AI into their daily medical practice.5
The survey, published in BMJ Health and Care Informatics, found that one in five GPs have used AI tools like ChatGPT in their clinical practice. Of 1,006 respondents, nearly a third used AI to draft documentation after patient appointments, while 28% used it to suggest alternative diagnoses. A quarter relied on AI for treatment recommendations.5
Building upon our previous review of the Role of WhatsApp in Clinical Practice, it is important to similarly explore the increasing integration of artificial intelligence (AI) tools like ChatGPT in healthcare and to understand the implications of doing so.
In this AI Compendium, we will examine how doctors and medical professionals are using ChatGPT and explore the potential pitfalls and ethical considerations surrounding its use in clinical settings.
We will explore the South African legislative landscape and how it intersects with AI in healthcare, referencing our previous discussions on ethics and data privacy.
ChatGPT (Chat Generative Pre-trained Transformer) was developed by OpenAI and is an advanced AI language model built on a deep learning architecture first introduced in the 2017 paper "Attention Is All You Need".30
At its core, ChatGPT is designed to understand and generate what is called natural language. This means it can comprehend context, respond coherently, and even maintain the flow of a conversation. It does this by learning patterns in language, predicting the next word in a sentence based on the words that precede it.
Over time, through exposure to millions of sentences and interactions, ChatGPT becomes proficient in generating text that feels contextually relevant and conversationally appropriate.
How Does ChatGPT Work?
An understanding of how ChatGPT works is important to understanding the risks and opportunities involved in its use. So let's delve in.
The Training Process
ChatGPT’s deep learning model is based on a structure called the transformer (remember the name: Chat Generative Pre-trained Transformer).
The training process involves feeding the model vast amounts of data, allowing it to learn language patterns, grammar, facts, and the nuances of human communication.
Large datasets are essential for training these AI models, providing the breadth of examples necessary for effective learning. ChatGPT was trained on publicly available (and licensed proprietary) information, and, depending on your account settings, the queries you enter may themselves be used to further refine the model.
Tokenization and Input Processing
When you input a question or prompt into ChatGPT, the model breaks down the text into smaller units called tokens. Tokens can represent words or even sub-words, and they allow the model to process language more efficiently. ChatGPT analyses the relationship between these tokens, considering their context and how they interact with one another to produce a meaningful response.
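Tokenization can be made concrete with a small sketch. Real models use byte-pair encoding over a learned vocabulary of roughly 100,000 tokens; the toy vocabulary and greedy matcher below are purely illustrative and do not reproduce OpenAI's actual tokenizer.

```python
# A deliberately simplified illustration of tokenization.
# The vocabulary below is invented for this example.
TOY_VOCAB = {"radio", "graph", "y", "chest", "x", "-", "ray"}

def toy_tokenize(word: str) -> list[str]:
    """Greedily split a word into the longest known sub-word pieces."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in TOY_VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(toy_tokenize("radiography"))  # ['radio', 'graph', 'y']
```

Note how "radiography" is not in the vocabulary, yet the model can still represent it as familiar sub-word pieces; this is how tokenizers handle rare medical terms.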
Generating Responses
ChatGPT uses the probabilistic model built during its training to generate responses based on the input it receives. It evaluates the words and phrases in the input and generates an output by selecting tokens one at a time, always choosing a plausible next token based on its training.
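That selection step can be sketched in a few lines. The tokens and probabilities below are invented for illustration; a real model computes such a distribution over its entire vocabulary at every step.

```python
import random

# A toy next-token distribution for a context such as
# "the patient presented with" — values are invented.
next_token_probs = {
    "fever": 0.40,
    "chest": 0.25,
    "a": 0.20,
    "acute": 0.15,
}

def greedy_pick(probs: dict[str, float]) -> str:
    """Deterministic decoding: always take the most probable token."""
    return max(probs, key=probs.get)

def sample_pick(probs: dict[str, float]) -> str:
    """Stochastic decoding: sample in proportion to probability,
    which is why the same prompt can yield different answers."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy_pick(next_token_probs))  # fever
```

The difference between the two functions also explains a clinically important behaviour: because production systems sample rather than always taking the single most probable token, asking ChatGPT the same question twice can produce different answers.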
Unlike traditional rule-based systems that rely on pre-programmed responses, ChatGPT can generate novel content based on its learned patterns. This gives it the ability to craft tailored, contextually appropriate replies to specific questions, which is what makes it powerful in fields like medicine.
Contextual Understanding
One of ChatGPT’s key features is its ability to understand context. Unlike basic chatbots that might only react to immediate prompts, ChatGPT can remember information within a conversation. This enables it to offer more meaningful and coherent responses over multiple interactions. For example, if a user asks a follow-up question that builds on a previous response, ChatGPT can integrate that context and provide a more relevant and informed answer.
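This "memory" is worth making concrete: the underlying model is stateless, and chat interfaces achieve context by resending the full conversation history with each request. The message format below mirrors common chat APIs; the helper function is hypothetical.

```python
# Conversational context as a growing list of messages.
# The whole list is sent to the model on every turn.
history = [
    {"role": "user", "content": "What is hypertension?"},
    {"role": "assistant",
     "content": "Hypertension is persistently raised blood pressure."},
]

def add_follow_up(history: list[dict], question: str) -> list[dict]:
    """Return a new history including the follow-up question."""
    return history + [{"role": "user", "content": question}]

conversation = add_follow_up(history, "How is it treated?")
print(len(conversation))  # 3
```

Because the model sees the earlier exchange, it can resolve "it" in the follow-up to hypertension, which is what gives the impression of memory within a session.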
How AI Models Use Large Datasets to Facilitate Healthcare Diagnosis and Detection
One of the most common uses of AI is to train models capable of detecting diseases from medical images, such as X-rays, MRIs, or CT scans. For example, algorithms are trained to identify cancer, heart disease, or lung pathologies.
Large datasets are essential for training these AI models, providing the extensive clinical data and radiological images necessary for effective learning. Notable examples of datasets that are used in the training process include:
The Cancer Imaging Archive (TCIA): A service that de-identifies and hosts a large archive of medical images of cancer, accessible for public download. The data are organized into collections, typically related by a common disease, image modality, or research focus.
CheXpert Dataset: A dataset consisting of 224,316 chest radiographs of 65,240 patients collected from Stanford Health Care. It includes both frontal and lateral views, with associated radiology reports, facilitating automated chest X-ray interpretation.
MedNIST Dataset: A collection of medical images designed for training and evaluating AI models in the medical imaging domain. It includes a variety of imaging modalities and disease categories, supporting the development of robust AI diagnostic tools.
These datasets, and others like them, are crucial for training AI models, enabling them to learn from a diverse range of clinical scenarios and imaging characteristics and enabling them to develop the capability to detect and diagnose diseases in some instances with higher accuracy than their human counterparts.
Despite its impressive capabilities, ChatGPT has limitations. The most notable is that its knowledge is static: it is based on the data it was trained on, which cuts off at a fixed point in time (commonly cited as September 2021 for the earlier GPT models).
As a result, it may not have access to the most recent research, medical guidelines, or emerging treatments in fields like healthcare. Furthermore, while it can generate human-like text, ChatGPT does not have true understanding or consciousness. It cannot reason, reflect, or apply complex ethical principles the way a human can.
Perhaps of greatest concern for local practitioners is that ChatGPT is modelled on data which may not reflect local circumstances. Much of the data used to train AI models originates from developed countries, and the demographics, preferences, and behaviours of those populations are over-represented in the model's outputs.
This imbalance in data leads to AI systems that are inherently biased, making them less effective and equitable when applied in African contexts. AI models trained on such data often fail to account for the unique characteristics of African populations, resulting in skewed outcomes and diminished performance in the region.
Compounding these issues is the challenge of data quality. Inconsistent data collection and management practices across regions lead to variability in the quality of data used to train AI systems. This undermines the accuracy and reliability of AI models such as ChatGPT, as they may be based on incomplete or poorly structured data.
Moreover, the lack of extensive historical data records in many African countries further hampers the development of effective predictive models. These data gaps limit the potential of AI to make a meaningful impact in African healthcare, education, and other critical areas, highlighting the need for more localized data collection and improved data management practices.
All of these issues make it crucial for healthcare professionals to approach ChatGPT-generated responses with caution, particularly in clinical settings where accuracy, precision, and individualised care are critical.
The Application of ChatGPT in Everyday Practice
Healthcare professionals can use ChatGPT for a range of purposes, including:
Administrative Tasks: Many healthcare providers use ChatGPT for more routine administrative functions such as drafting emails, generating reports, or managing schedules. As shown in the BMJ survey, this is currently the most popular use of ChatGPT among practitioners. It allows doctors and clinicians to free up time for direct patient care while still maintaining efficient operations.
Clinical Decision Support: While not a replacement for a clinician’s judgment, ChatGPT can assist doctors in finding relevant clinical guidelines, reviewing patient symptoms, and even suggesting differential diagnoses. It can also help in exploring the latest treatment options and therapies for specific conditions.
Patient Communication: ChatGPT can be used in drafting patient communications, such as discharge instructions, medication guidance, and appointment reminders. It can also help explain medical terminology in layman's terms, making it easier for patients to understand their conditions and treatments.
Medical Education and Research: ChatGPT can help in summarizing research articles, extracting key insights from scientific papers, and providing concise explanations of complex medical concepts. This can aid students and professionals in keeping up with the latest advancements in medicine.
There is no doubt that as the technology continues to evolve, the integration of ChatGPT into your everyday practice will become seamless. Understanding its practical implementation becomes crucial.
Let’s take a closer look at what is involved in effectively integrating ChatGPT into your practice, together with the key considerations you should take into account.
How can you access ChatGPT for your practice?
ChatGPT is freely available on your desktop or phone and easy to use without further intervention. If you are interested in integrating it more deeply into your practice, however, there are various avenues you can explore, each requiring a different level of technical expertise depending on the desired application and customization. Here is a review of the options available to you:
Freeware
The free version of ChatGPT is a good place to start for healthcare practitioners. While you are required to sign up for an account to access the platform, the free version is well worth your while.
You can use the free version to assist you in drafting patient communications, such as discharge instructions and medication guidelines, and help you improve the clarity of your communications. It also can be a useful tool for generating educational materials, helping to explain intricate medical concepts in simpler terms for patient education.
There are certain limitations in the free versions that users should be aware of.
Access to advanced models like GPT-4 is restricted, and during peak usage times, response times may be slower, or access may be temporarily limited.
Additionally, the free version does not support the use of plugins, which means it cannot provide real-time information or access external databases. These constraints can affect the depth and accuracy of responses in specialised fields such as healthcare. For users requiring more advanced features and consistent performance, upgrading to a paid plan such as ChatGPT Plus may be an option.
Basic ChatGPT Integration
Pre-Built Platforms: Some platforms offer ChatGPT integration with minimal setup, allowing healthcare professionals to utilize AI capabilities without extensive programming knowledge. These solutions often provide user-friendly interfaces and are designed for quick deployment. Examples of pre-built platforms include IBM watsonx Assistant and Kore.ai.
Custom Integration
For more tailored applications, integrating ChatGPT into your everyday practice requires a foundational understanding of programming languages such as Python or JavaScript. This enables customization to fit specific practice needs, such as developing diagnostic tools or patient communication systems.
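As an illustration of what such a custom integration involves, here is a minimal Python sketch that calls OpenAI's chat completions REST API using only the standard library. The endpoint URL, model name, and system prompt reflect OpenAI's public API at the time of writing and should be checked against the current API reference; the function names are our own.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(patient_question: str) -> dict:
    """Construct the request body for a patient-communication draft."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system",
             "content": "You draft clear, plain-language patient "
                        "communications for a clinician to review."},
            {"role": "user", "content": patient_question},
        ],
    }

def ask(patient_question: str) -> str:
    """Send the request; requires the OPENAI_API_KEY environment variable."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(patient_question)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

Note that anything placed in the request body is transmitted to an external server: never include identifiable patient information in such a call, as POPIA obligations apply to everything you send.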
Advanced Customization
Bespoke Solutions: Developing bespoke solutions or deeply embedding ChatGPT into your existing systems may necessitate collaboration with IT professionals or developers proficient in AI and machine learning. This ensures seamless integration and optimal performance, particularly for complex applications like electronic health record systems or advanced diagnostic tools.
The Challenges of Using ChatGPT in Medicine
As with any emerging technology, there are several pitfalls that healthcare professionals must be aware of when using ChatGPT.
Misinformation: While ChatGPT offers valuable assistance in healthcare, it carries the risk of disseminating misinformation. The model may generate plausible yet inaccurate or outdated medical information, potentially leading to patient harm if relied upon without proper verification.
Accuracy: While ChatGPT is trained on large datasets, it may not always provide accurate or up-to-date information. Medical knowledge evolves rapidly, and there is the possibility that ChatGPT’s responses may not reflect the latest guidelines or research, which could lead to incorrect diagnoses or treatment plans.
Individual Patient Insights: Another significant challenge is the lack of personalization. ChatGPT does not have access to a patient’s complete medical history or real-time data, which makes it difficult to provide tailored recommendations. For instance, ChatGPT might suggest a general treatment plan without considering the individual nuances of a patient’s condition, such as comorbidities, allergies, or prior treatments.
Lack of Cultural Insights & Context: As indicated previously, ChatGPT is currently unable to understand nuance or individual context. Furthermore, it is important to bear in mind that ChatGPT is trained on available data that may not reflect the healthcare context within which we operate in sub-Saharan Africa and may, therefore, generate responses that are not relevant to our regional circumstances.
Data Privacy & Security: As with any technology, the use of ChatGPT in patient care raises concerns about data privacy and security. Given the sensitive nature of healthcare data, any interaction with AI must comply with data protection laws to avoid breaches and misuse of personal health information.
Regulations regarding the use of ChatGPT in Medicine: South African and Global Perspectives
As the use of artificial intelligence (AI) tools like ChatGPT continues to expand in healthcare, there is a growing need for clear regulatory frameworks to ensure safe and ethical use. The potential of AI to transform medicine is vast, but it also raises significant concerns about data privacy, accuracy, and accountability.
Most countries have started to develop and implement regulations to govern the use of AI in healthcare. While Africa lags somewhat behind Europe and the United States of America, the continent is actively developing frameworks to regulate artificial intelligence (AI) to ensure ethical and responsible deployment across the continent. In July 2024, the African Union Executive Council endorsed the Continental AI Strategy, emphasizing an Africa-centric approach to AI that promotes ethical, responsible, and equitable practices.1
Additionally, the Malabo Convention, which establishes a legal framework for data protection and cybersecurity similar to the EU's GDPR, entered into force on June 8, 2023, bolstering the foundations for AI regulation throughout Africa.
A comprehensive outline of the Guidelines and Legislations related to Digital Healthcare can be found in the companion article to this review, which has been linked below. We recommend all practitioners have a working knowledge of the regulations as they apply to new media.
For the purposes of this review, we have provided a brief overview of the regulations governing the use of AI in your healthcare practice.
Health Professions Council of South Africa (HPCSA) Guidelines
The HPCSA plays a central role in regulating healthcare practices in South Africa, and this extends to the use of AI tools in clinical settings. The Ethical Guidelines for Good Practice in Telemedicine issued by the HPCSA, although not specific to ChatGPT, set out general ethical principles for any technology-based healthcare delivery. These guidelines emphasize the importance of maintaining patient confidentiality, ensuring informed consent, and using technologies that enhance, rather than replace, the role of healthcare professionals.22,23,24,25,26,27
Confidentiality and Data Protection
ChatGPT, as an AI tool, may interact with sensitive patient data, and this requires strict adherence to the Protection of Personal Information Act (POPIA).18
This act mandates that healthcare providers must take measures to ensure that any personal or health information shared with AI tools is secure and used ethically. This includes the requirement that any AI-based system storing or processing patient data must be compliant with data protection laws.15
Practitioners should be aware that informed consent still applies when using ChatGPT in a healthcare setting.
Patients must be informed that AI is being used in the delivery of their care and must explicitly consent to its application.
Informed Consent
When using AI tools like ChatGPT in healthcare settings, practitioners must ensure that patients are fully informed about how their data will be used and that they consent to the use of AI tools for specific purposes. This is particularly important in the context of telemedicine and virtual consultations, where ChatGPT could be involved in diagnosing conditions or offering treatment advice.
National AI Regulation
While South Africa currently lacks specific legislation regulating AI in healthcare, there is an increasing recognition of the need for targeted laws.10
The National Digital and Data Policy, currently under discussion, is expected to address the integration of AI into healthcare and other sectors, with a focus on ensuring that AI tools operate in a way that benefits the public while safeguarding rights.
Additional regulations which may impact the use of ChatGPT
Various other regulations apply to ChatGPT, which practitioners should be aware of. These include the Copyright Act and the Patents Act, which apply to AI-generated works such as forms and documents, and the draft Cyber Security Bill, published in 2024, which includes AI in its definition of regulated systems.29
South Africa is also following international trends where AI is subject to increasing scrutiny to ensure its ethical deployment in healthcare. As AI technology evolves, the local regulatory landscape is likely to adapt in response to these developments.
Ethical Considerations in the Use of ChatGPT in Medicine
While ChatGPT can be an invaluable tool in healthcare, its use must be approached with caution, especially when it comes to patient care. While some of the issues were previously dealt with under the section on the Challenges of ChatGPT, it is worth reviewing those with a view to the ethical implication of ChatGPT specifically.
Clinical Accuracy
ChatGPT, like other AI models, may occasionally generate incorrect or outdated information, which can lead to misdiagnosis or inappropriate treatment plans. Healthcare professionals must critically evaluate AI-generated content and cross-reference it with established medical knowledge before applying it to patient care. This approach aligns with the ethical principle of non-maleficence, ensuring that interventions do not harm patients.
Data Privacy and Confidentiality
Patient data is highly sensitive, and its protection is paramount. Utilizing ChatGPT involves processing patient information, which could be stored or transmitted through external servers. This raises concerns about data breaches and unauthorized access. Healthcare providers must ensure that any AI tool complies with South Africa's Protection of Personal Information Act (POPIA) and other relevant data protection regulations. Implementing robust data encryption and access controls is essential to safeguard patient confidentiality.
Over-Reliance on Technology
While AI can enhance efficiency, there's a risk of over-reliance, potentially diminishing clinical judgment and the human aspect of patient care. Healthcare professionals should use ChatGPT as a supplementary tool rather than a replacement for their expertise. Maintaining a balance between technological assistance and professional judgment is crucial to uphold the ethical principle of beneficence, ensuring that interventions are in the best interest of the patient.
Informed Consent
Patients should be informed when AI tools are used in their care. Transparent communication about the role of ChatGPT in their treatment plan is necessary to obtain informed consent. This practice respects patient autonomy and aligns with ethical standards in medical practice. You can find out more about obtaining informed consent from patients here.
Regulatory Compliance
The use of AI in healthcare is subject to evolving regulations. In South Africa, the national regulatory framework for AI in healthcare is still in development. Healthcare providers must stay informed about current laws and guidelines to ensure compliance.
How to Mitigate the Risk of Using ChatGPT in Your Practice
To ensure that you mitigate the risk of ChatGPT in your practice, here are some issues you should include in your practice strategy:
Human Oversight and Validation: ChatGPT’s outputs should never be solely relied upon for clinical decision-making. Healthcare professionals must ensure that AI-generated suggestions are cross-checked with current medical guidelines and validated by expert opinion.
Personalisation and Contextualisation: To mitigate the risks of generalized advice, healthcare professionals should aim to personalize AI-driven suggestions based on individual patient data, clinical history, and real-time monitoring.
Transparency and Explainability: Healthcare providers must understand how ChatGPT arrives at its conclusions. Transparent, explainable AI models can help clinicians assess the rationale behind AI-generated suggestions and make more informed decisions.
Data Privacy and Compliance: Ensuring that patient data is anonymized and securely handled in compliance with regulations like POPIA is essential for mitigating risks related to privacy breaches and unauthorized data access.
The above is by no means a comprehensive strategy for mitigating the risks of using ChatGPT in your practice.
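As a small illustration of the anonymization point, here is a deliberately simple sketch of redacting obvious identifiers before free text leaves your practice. The patterns below are illustrative only; genuine POPIA-grade de-identification requires far more (names, addresses, dates, rare conditions), so treat this as a starting point, not a guarantee.

```python
import re

# Map each placeholder to a pattern for an obvious identifier.
PATTERNS = {
    "[ID]": re.compile(r"\b\d{13}\b"),             # SA 13-digit ID numbers
    "[PHONE]": re.compile(r"\b0\d{9}\b"),          # local 10-digit numbers
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matching identifiers with placeholders before the text
    is sent to any external AI service."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Patient 8001015009087, contact 0821234567 or jane@example.com."
print(redact(note))
# Patient [ID], contact [PHONE] or [EMAIL].
```

Even with such redaction in place, the safest default remains to keep identifiable patient information out of AI prompts altogether.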
As the regulatory, legal and ethical landscapes adapt to rapidly evolving AI technologies such as ChatGPT, practitioners will need to ensure that they are aware of how these changes impact their everyday practice.
Leveraging ChatGPT to Tackle Healthcare Challenges in Sub-Saharan Africa
The growing use of AI tools like ChatGPT in healthcare raises important questions about how these technologies will impact regions like sub-Saharan Africa, where healthcare systems face unique challenges. While the adoption of AI presents significant opportunities for improving healthcare delivery, there are also concerns about whether sub-Saharan Africa will be included in the global AI revolution or left behind.
The region’s ability to benefit from AI advancements will largely depend on how quickly it can address regulatory, infrastructural, and economic barriers. When integrated correctly into existing healthcare systems or when created to meet specific challenges of the region, ChatGPT could significantly impact healthcare delivery in the region.
The caveat for all ChatGPT responses and activities is that ChatGPT has been trained on information that may not be reflective of the conditions in the region; therefore, the answers provided by the AI may not be applicable, appropriate, or relevant to our context.
That said, ChatGPT and similar AI technologies offer several benefits in the sub-Saharan region. Let's take a closer look at some of the opportunities ChatGPT offers local healthcare practitioners.
Overcoming Language Barriers
Language barriers represent significant challenges to the delivery of optimal healthcare in the region. ChatGPT's extensive language support, encompassing over 95 languages, enables healthcare practitioners to bridge communication gaps across Africa's diverse linguistic landscape. This capability ensures that information and services are accessible to a broader audience, promoting equitable access to healthcare information across the continent.
Have a look at how a local South African start-up is using the technology to remove language barriers.
Tailoring Educational Materials to Local Contexts
ChatGPT can significantly enhance educational experiences by assisting practitioners to tailor content to local languages and cultural contexts.
By incorporating culturally relevant examples and scenarios into their prompts, practitioners can guide ChatGPT to ensure that its responses are relevant, engaging, and meaningful.
This customization will aid in bridging educational gaps in healthcare, specifically in rural regions, ensuring healthcare information is more accessible and effective for diverse populations.
Improving Healthcare Access and Efficiency
There is a severe shortage of doctors and healthcare workers in the region, particularly in rural and underserved areas. ChatGPT can assist with this crisis in various ways: reducing the administrative burden (for example, by documenting medical information, freeing resources to attend to patients), answering patient queries in user-friendly language, and even, in some instances, providing general guidance to healthcare practitioners in relation to patient diagnosis, not to replace specialist care, but to act as an adjunct medical tool.
In regions with limited healthcare infrastructure, ChatGPT could offer an accessible and cost-effective solution for both healthcare workers and patients alike.
Telemedicine and Remote Consultations
With AI's potential to facilitate telemedicine and remote consultations, sub-Saharan Africa can benefit from enhanced access to medical expertise, even in remote areas.
ChatGPT could serve as an adjunct tool for healthcare providers, offering guidance on treatment options and medical advice that may not otherwise be readily available.
In Conclusion
Artificial Intelligence (AI) holds immense potential to revolutionize sectors such as healthcare, with transformative benefits already being seen in fields like diagnostic accuracy, personalized treatment plans, and operational efficiency.
In South Africa’s healthcare system, ChatGPT can streamline administrative tasks, enhance clinical decision-making, and improve patient communication. However, as with any technology, its use must be carefully managed to mitigate risks such as misinformation, privacy concerns, and a lack of personalization.
While we are just beginning to explore AI’s full potential, it is essential to remain vigilant about the ethical, legal, and social challenges that come with its integration. Human oversight must remain central in AI deployments, such as ChatGPT, to ensure that the technology remains a supportive tool rather than a replacement for human judgment.
Disclaimer
Every effort has been made to attribute quotes and content correctly. Where possible, all information has been independently verified. The Medical Education Network bears no responsibility for any inaccuracies which may occur from the use of third-party sources. If you have any queries regarding this article, contact us.
Fact-checking Policy
The Medical Education Network makes every effort to review and fact-check the articles used as source material in our summaries and original material. We have strict guidelines in relation to the publications we use as our source data, favouring peer-reviewed research wherever possible. Every effort is made to ensure that the information contained here accurately reflects the original material. Should you find inaccuracies or out-of-date content or have any additional issues with our articles, please make use of the Contact Us form to notify us.