The AI Doctor: Promising Panacea or Perilous Placebo?

By Turing
[Illustration: a springer spaniel facing a robot]

In a world increasingly shaped by digital transformation, healthcare, an arena once considered sacrosanct and insulated from the cyber revolution, finds itself on the precipice of change. At the forefront of this evolution is ChatGPT, the conversational artificial-intelligence model from OpenAI: a system built for open-ended dialogue that is now demonstrating an uncanny potential in medical aid and mental health support. As with all transformative technologies, however, it is imperative to balance optimism with a measure of critical scrutiny.

Doximity, a professional networking platform for healthcare providers, has launched an ambitious experiment called DocsGPT, which harnesses ChatGPT to streamline the administrative drudgery that plagues the medical profession, such as drafting appeals for insurance denials. The endeavour has been hailed as a success: it reportedly saves physicians up to two hours per day, freeing them to refocus on the very essence of their vocation, patient care.
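What might such a tool look like under the bonnet? The sketch below is a minimal, hypothetical illustration of drafting an appeal letter with OpenAI's Python client. The prompt wording, model choice, and claim details are invented for this article; they are not Doximity's actual implementation.

```python
# Minimal sketch of drafting an insurance-denial appeal via the OpenAI API.
# Hypothetical prompt and claim details; not Doximity's actual DocsGPT code.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

claim = {
    "procedure": "MRI of the lumbar spine",
    "denial_reason": "not medically necessary",
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You draft formal appeal letters for denied "
                       "insurance claims. Be concise and professional, "
                       "and cite no sources you cannot verify.",
        },
        {
            "role": "user",
            "content": f"Draft an appeal for a claim denied as "
                       f"'{claim['denial_reason']}'. "
                       f"Procedure: {claim['procedure']}.",
        },
    ],
)

draft = response.choices[0].message.content
print(draft)  # a physician still reviews and edits before anything is sent
```

The closing comment is the crucial part: the output is a time-saving draft, not a finished document, a point the next episode drives home.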

Yet amidst the success stories are cautionary tales that give pause. A medical professional relayed her encounter with the AI, in which it incorrectly cited medical references in an appeal letter. The episode underscores the perils of unchecked reliance on AI in matters of critical importance. For all the chatbot's extraordinary capabilities, the incident is a stark reminder of the crucial role human oversight continues to play, especially in a domain as sensitive as healthcare.

In the sphere of mental health, AI's influence is palpable. Freddie Chipres, a mortgage broker grappling with loneliness and mild depression, sought the counsel of ChatGPT, prompting the chatbot to respond in the style of a cognitive behavioural therapist. To his surprise, the advice was not just reasonable but beneficial. Chipres found solace in its simple, practical suggestions, from taking walks to boost his mood to practising gratitude and meditation.
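Chipres' technique amounts to little more than a carefully worded instruction. A minimal sketch of that kind of prompt, using OpenAI's Python client with wording invented for illustration, might look like this:

```python
# Sketch of steering ChatGPT toward a CBT-flavoured conversational style.
# The prompt text is illustrative; ChatGPT is not a licensed therapist.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": "Respond in the style of a cognitive behavioural "
                   "therapist: ask open questions, gently challenge "
                   "negative thought patterns, and suggest small, "
                   "practical steps. Do not diagnose. Encourage the "
                   "user to seek a licensed professional for anything "
                   "serious.",
    },
    {"role": "user", "content": "I've been feeling lonely and low lately."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

Everything hinges on the system message: the model has no therapeutic training beyond what such an instruction coaxes out of its general conversational ability.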

Yet ChatGPT, despite its uncanny ability to mimic human conversation, is not a licensed therapist. It can neither diagnose mental health conditions nor chart out comprehensive treatment plans. And although it can generate empathetic responses, mental health experts caution against relying too heavily on a tool that was never designed for therapeutic purposes.

Privacy, the bedrock of the therapist-patient relationship, is another critical issue. When Chipres confided in ChatGPT, did he realise that the confidentiality protections of the consulting room do not extend to his digital interactions? As we entrust ever more personal concerns to digital platforms, the question of how that data is stored, used, and potentially misused must not be taken lightly.

The most compelling demonstration of AI's potential in healthcare, and of its limitations, comes from the extraordinary story of Sassy, a Border Collie suffering from a severe form of anaemia. The dog's owner, Cooper, turned to GPT-4 for help when the vet struggled to identify the cause of Sassy's illness. After being supplied with detailed medical records and test results, the chatbot suggested that the severe anaemia could be a symptom of an underlying condition, potentially immune-mediated haemolytic anaemia (IMHA), a condition common in Collies. The vet confirmed this diagnosis, and Sassy, once on the brink of death, made a full recovery.
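The exchange reportedly happened by pasting records into a chat window, but the same query can be framed programmatically. The sketch below is purely illustrative: the lab values are invented, and the output is a hypothesis for a veterinarian to verify, never a diagnosis.

```python
# Illustrative sketch: asking GPT-4 for differential diagnoses from lab
# results. Values are invented; output is a hypothesis, not a diagnosis.
from openai import OpenAI

client = OpenAI()

records = """
Species: dog (Border Collie), age 7
PCV: 12% (reference range 37-55%); severe anaemia
Blood smear: spherocytes present; in-saline agglutination positive
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a veterinary assistant. List plausible "
                       "differential diagnoses with brief reasoning, and "
                       "state clearly that only a veterinarian can "
                       "confirm any of them.",
        },
        {"role": "user", "content": records},
    ],
)

print(response.choices[0].message.content)
```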

This incident is a potent reminder of AI's potential to assist in healthcare, and an equally stark reminder of the importance of human oversight and expertise. Tools like ChatGPT can help revolutionise medicine, but they should complement, not replace, human judgement and empathy. As we navigate the complex terrain of AI-assisted healthcare, we must strike a delicate balance between embracing the potential benefits and maintaining a cautious, measured approach to implementation.

AI's presence in mental healthcare, as Freddie Chipres' experience shows, offers an accessible, non-judgmental source of support at a time when mental health issues are on the rise. The allure of an ever-present, empathetic listener is undeniable. But it bears repeating that ChatGPT is not a licensed therapist, and over-reliance on AI-driven advice could be detrimental to an individual's well-being.

The story of GPT-4's role in diagnosing Sassy's illness is an inspiring illustration of the vast potential that AI holds in healthcare, but it also highlights the need for human expertise and oversight in harnessing that potential responsibly. It was the collaboration between the AI and the veterinarian, not either alone, that led to Sassy's successful treatment and recovery.

The tale of ChatGPT's foray into healthcare is a fascinating one, reflecting the seemingly limitless potential of AI to transform the healthcare landscape. It is also a reminder that while AI can augment human effort, it must not supplant it. As we explore this brave new world of AI-assisted healthcare, we must balance the convenience of technology against the gold standard of human expertise and empathy. In doing so, we can ensure that AI remains a promising panacea rather than a perilous placebo.