
Is AI Better Than a Doctor?

  • Writer: Shafi Ahmed
  • Feb 17
  • 7 min read

"The human touch in medicine is irreplaceable, but when used responsibly, AI can be an invaluable tool to augment human capabilities and improve patient outcomes"


As a surgeon and futurist deeply embedded in the digital transformation of healthcare, I stand at the juncture where technology meets human expertise. Today, I delve into a provocative question circulating in recent discussions: "Is AI better than a doctor?"


Welcome to this week's edition of AI Horizons, where I discuss the most important topics at the confluence of artificial intelligence and healthcare. This week, I address a provocative debate spurred by recent studies suggesting that AI can outperform humans in specific diagnostic tasks. It is essential to approach this claim with nuance and a deep understanding of the complexities of healthcare.


The AI Versus Doctor Debate


In recent years, we have seen an explosion in AI capabilities, challenging our preconceptions of medical practice and decision-making. We increasingly hear phrases like "ChatGPT passes medical exams" or "AI makes better diagnoses than doctors." But what is the reality? Is it hype, or is AI going to replace doctors?


These are the questions everyone is asking, and as a surgeon and futurist with three decades of experience in healthcare, much of it spent bringing technology and AI into clinical practice, I am excited to guide you through this rapidly evolving field.


The evidence is compelling: AI systems that diagnose certain conditions more accurately than experienced doctors, and algorithms that predict patient outcomes with astonishing precision.


A landmark study in Nature demonstrated that Google’s AI model could outperform human radiologists in breast cancer detection, producing fewer false positives and false negatives. In another widely reported case, IBM's Watson identified a rare disease that had eluded clinicians by sifting through massive troves of data in minutes.


A Harvard Health article asked, "Does AI answer medical questions better than your doctor?" In the study it discussed, ChatGPT's responses were rated better than physicians' in nearly 80% of cases. Specifically, ChatGPT was rated highly for quality in 78% of responses and for empathy in 45%, compared with physicians' ratings of 22% and 4.6%, respectively. Other reports claim that "ChatGPT solves medical exams and scores better than doctors." But real life is different.


In my decades of practice, I have yet to see a patient arrive with a multiple-choice list of possible diagnoses or a neatly completed questionnaire. Doctors listen, examine, and explore to uncover the underlying cause. Moreover, the same study did not assess the accuracy of the answers or their impact on patient health, and the evaluators' criteria for quality and empathy were subjective and untested.


Another limitation is the data on which these AI models are trained. They learn from existing data up to a fixed cut-off point, and that data is inherently limited. Did you know that 97% of all medical data is in non-text form, and that alone exceeds all the data on which the largest AI model, ChatGPT, was trained?


In the past, medical knowledge grew slowly, doubling perhaps once in a hundred years; today, it doubles every three months. Yet many AI models are not trained on the most recent data, or cannot access certain data because it sits behind a paywall or inside secure systems. We can therefore be confident that the knowledge these models draw on is both limited and not free of inaccuracy.



The Human Element: Empathy, Communication, and Clinical Judgment


Medicine is more than statistics and algorithms; it is about human connection. Physicians have a unique capacity to empathise with patients, understand their concerns, and build trusting relationships. These human qualities are critical to successful medical care and cannot easily be defined or recreated by AI.


Furthermore, clinical judgment is a complex blend of knowledge, experience, and intuition. It requires the ability to synthesise information from many sources, weigh the patient's circumstances, and make sound decisions in the face of uncertainty. While artificial intelligence can help with data analysis and pattern detection, it cannot replace the nuanced judgment and critical thinking that come from years of clinical practice.


One surprising study suggested that AI can appear more empathetic than human doctors. This assertion stems from AI's ability to quickly generate comprehensive, context-aware responses—often perceived as more empathetic by patients due to their detail and attentiveness. However, genuine empathy involves more than just delivering information; it encompasses understanding patient emotions and reacting appropriately, a subtlety that AI has yet to master fully.


Although AI has been shown to answer medical questions well and to diagnose disease, some studies show that AI models lack ethical perspective and can exhibit bias toward patients. These biases arise in different ways, from how a user phrases a prompt to the data on which a model was trained, and they must not be ignored. Despite its benefits, the deployment of AI in healthcare must be navigated carefully to address significant ethical concerns, including potential biases. AI systems can inadvertently perpetuate existing disparities in healthcare if not carefully monitored and corrected. Ensuring fairness, transparency, and accountability in AI applications is crucial. Strategies to mitigate bias include diversifying training data, implementing rigorous testing across varied demographics, and maintaining human oversight, especially in critical decision-making processes.



The Role of AI as a Tool, Not a Replacement


Emphasising AI’s role as an augmentative tool rather than a replacement or competitor offers a more nuanced perspective on its potential. We should embrace AI as a powerful tool that can augment doctors' capabilities by streamlining administrative tasks, automating routine diagnostic processes, and allowing doctors to focus on what matters the most: patient care.


AI’s capabilities in diagnosis are undeniable, but patient care is not only about making a diagnosis and suggesting treatment; it is also about comforting, understanding, and healing. Despite AI’s advances, the core elements of healthcare (empathy, ethical judgment, and holistic understanding) remain uniquely human. Patients value the reassurance of a doctor's touch, the empathy in their voice, and the knowledge in their eyes, elements no AI can replicate.


The prowess of AI in diagnostics stems from its unparalleled ability to analyze extensive datasets rapidly—a task unfeasible for any human due to cognitive and temporal limitations. AI algorithms integrate findings from thousands of global cases, detecting patterns and anomalies that might elude even the most vigilant clinicians.


A study published in JAMA Network Open investigated the impact of a large language model (LLM), specifically ChatGPT Plus (GPT-4), on physicians' diagnostic reasoning compared with conventional diagnostic resources. It concluded that while the LLM alone performed better, giving physicians access to an LLM did not significantly improve their clinical reasoning compared with conventional resources.


Another study compared AI models' ability to diagnose respiratory diseases and found that ChatGPT scored highest, noted in particular for its human-like responses. This suggests that large language models could help doctors and medical staff assess patients more efficiently, potentially reducing pressures on healthcare systems like the NHS.

However, it's crucial to remember that these studies often focus on narrow, well-defined tasks in controlled environments. The real-world practice of medicine is far more complex. It involves interpreting data, understanding patient context, considering individual patient preferences, and building trust and rapport. These are uniquely human qualities that AI cannot easily replicate.


How many of you have visited a doctor and noticed that they were glued to their computer screen while you were talking to them? That is because doctors spend a great deal of time managing data; one study found that doctors in the US spend more than 15 hours a week on it. These are the tasks AI can help with.

AI can automate routine work, such as scheduling appointments and generating medical reports, allowing doctors to spend more time with patients. Tools like the generative AI-based system Dax Copilot are changing how clinical notes are drafted, integrating directly into EMR systems and significantly reducing the administrative load on healthcare providers. By cutting the time spent on paperwork, AI enables medical professionals to focus more on patient care, potentially reducing burnout and improving job satisfaction.


Beyond administrative tasks, AI can quickly analyze massive amounts of medical literature, patient data, and research findings to identify relevant information and potential treatment options. AI can help personalize treatment plans based on individual patient characteristics and genetic predispositions. AI models assist in clinical trials by improving patient-matching processes, thus enhancing the efficiency and effectiveness of medical research. This capability extends to diagnosing diseases, where AI has demonstrated the potential to surpass traditional diagnostic tools in accuracy and speed.


AI is not just another tech tool; it is a versatile asset that acts much like a digital intern, capable of performing a wide range of tasks and optimizing clinical and administrative processes across the healthcare spectrum. As AI becomes more integrated into healthcare, ethical considerations such as privacy, data security, and algorithmic bias must be prioritized. AI algorithms should be clear and understandable to clinicians; patient information must be safeguarded and used responsibly; AI must be designed to avoid bias and ensure equitable access to care; and qualified healthcare professionals should supervise AI systems to interpret results and make decisions.


As a surgeon and futurist, I envision the future of healthcare as a collaboration between AI and human doctors, enhancing diagnostic accuracy and personalizing treatment. Rather than asking whether AI is better than doctors, we should ask how AI can strengthen the healthcare ecosystem. The main focus should be improving healthcare efficiency and accessibility, not replacing human roles. Important conversations lie ahead on AI's impact on mental health and on ethical issues such as algorithmic bias and data privacy. Engaging with AI now is essential for healthcare professionals who want to integrate it effectively; this is the beginning of a journey towards AI-enhanced healthcare and better patient outcomes.


Subscribe to my newsletter and stay tuned for next week's edition, in which we'll explore new dimensions of AI's impact on healthcare, focusing on mental health innovations and the ethical landscape of AI integration.
