8 Ways LLMs Can Be Your Practice Superpower

As a busy clinician, I have always looked for ways to work smarter, not harder.

That is where large language models (LLMs) like ChatGPT and Google’s Gemini come in. These AI tools are revolutionizing the way we practice. I would like to share exactly how, including some example prompts you can use, along with the ethical considerations we always need to keep in mind.

Patient Education Made Easy

Let’s be real: medical terminology can feel like a foreign language to most patients. LLMs help me bridge that gap by breaking down complex concepts into simple language and visuals. It’s a win-win – patients understand their condition better, and we know they have the information they need to manage their health.

Prompt Examples:

  • “Write a patient handout about hypertension that a 5th-grader could understand.”
  • “Create a diagram explaining the stages of COPD, using simple language.”
  • “Generate a list of frequently asked questions (FAQs) about diabetes management.”

Expanding Diagnostic Horizons

Sometimes I see a patient who presents with a truly puzzling collection of symptoms. This is especially true in psychiatry, the field I work in, where symptoms can be complex and overlapping. That’s when I turn to my LLM assistant (see prompt examples below). Ultimately, this approach saves hours of searching through textbooks and the internet, and the LLM can even surface relevant trusted references that I use to counter-check and validate its answers.

Prompt Examples:

  • “Provide a differential diagnosis for a 42-year-old female with chronic fatigue, joint pain, and a butterfly rash on her face.”
  • “A 68-year-old male presents with progressive memory loss, word-finding difficulty, and occasional disorientation. What conditions should I consider?”

Research Summarizer

Staying current on medical research is crucial, but it is a major time commitment. LLMs, again, are our secret weapon. Say we are interested in a new Alzheimer’s drug: we might prompt the LLM to “summarize the key findings and clinical implications of the most recent studies on lecanemab.” This makes the AI distill those lengthy papers into the highlights I need, saving me precious time. For extra reliability when using AI for knowledge work, it should be the norm to choose LLMs that can access the internet and to ask them to provide references.
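For readers comfortable with a bit of scripting, the same summarization request can be issued programmatically. A minimal sketch, assuming the OpenAI Python SDK (the model name and prompt wording are my own illustrative choices, not recommendations):

```python
def build_summary_messages(drug: str) -> list[dict]:
    """Build a chat request asking for a referenced summary of recent studies.

    The system message asks for verifiable references so the answer can be
    counter-checked against the original papers.
    """
    return [
        {"role": "system",
         "content": ("You are a clinical research assistant. Cite trusted, "
                     "verifiable references for every claim you make.")},
        {"role": "user",
         "content": (f"Summarize the key findings and clinical implications "
                     f"of the most recent studies on {drug}. List your "
                     f"references so I can counter-check them.")},
    ]

# Sending the request (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # assumed model name; substitute your own
#     messages=build_summary_messages("lecanemab"),
# )
# print(reply.choices[0].message.content)
```

Whatever tooling you use, the habit is the same: demand references, then actually read them.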

Prompt Examples:

  • “Summarize the key findings of recent clinical trials on the use of aducanumab for Alzheimer’s disease.”
  • “What are the potential long-term side effects of using biologic medications for rheumatoid arthritis? Focus on the most recent research.”

Note-Taking Automation

Wouldn’t it be awesome to dictate a patient encounter and magically get a complete progress note? While we are not quite there yet, LLMs are rapidly improving in this area.

After an appointment, we can dictate a rough summary, and the LLM will organize it into a structured note, even suggesting potential ICD-10 codes if you prompt it to do so. Based on my experience, this frees up so much of my time.

However, one reminder I repeat again and again: when interacting with AI models, be wary of sharing sensitive information that may end up in the provider’s systems or training data. We must refrain from disclosing personal confidential information to protect our patients’ privacy. The model may also summarize the case incorrectly, so always counter-check thoroughly and do not be tricked into thinking that a neatly organized note is an accurate one.
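To make the privacy point concrete, here is a minimal sketch of a client-side redaction pass run on a dictation before anything leaves your machine. The patterns are hypothetical illustrations (an “MRN:” prefix, a “Patient:” line, simple slash dates) and fall far short of real de-identification:

```python
import re

# Illustrative patterns only -- real PHI takes many more forms than this.
PATTERNS = [
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"(?m)^Patient:\s*.+$"), "Patient: [NAME]"),
]

def redact(text: str) -> str:
    """Replace recognizable identifiers before the text is sent to an LLM."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

dictation = "Patient: Jane Doe\nMRN: 483921, seen 3/14/2024 for worsening headaches."
print(redact(dictation))
# Patient: [NAME]
# [MRN], seen [DATE] for worsening headaches.
```

A few regexes are a reminder of where scrubbing must happen, not a substitute for a proper de-identification tool or a healthcare-grade LLM deployment.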

Prompt Examples:

  • “Turn this dictated summary into a structured progress note: ‘Patient reports worsening headaches, occasional nausea, and blurry vision. Neurologic exam within normal limits. Plan: Order brain MRI.’”
  • “Suggest ICD-10 codes for the following diagnoses: Type 2 diabetes mellitus, diabetic neuropathy, hypertension.”

Coding Clinical Diagnoses

Speaking of codes, if you have ever wrestled with the ICD-10 manual, you know how painful it can be to find the right code, especially when discharging patients. LLMs are great at suggesting appropriate codes based on diagnoses and procedures. A quick query and the LLM can give you an answer, preventing billing errors and potential headaches down the road.

Prompt Examples:

  • “What is the ICD-10-PCS code for a laparoscopic cholecystectomy?”
  • “Provide ICD-10 codes related to a diagnosis of major depressive disorder, recurrent episode.”

Clear Communication for Better Outcomes

We can use LLMs to translate instructions, consent forms, and discharge summaries into the languages our patients speak. When everyone is on the same page, it improves health equity and treatment adherence. Just remember to have the translations cross-checked by native speakers and verified by your hospital’s administration, exactly as you would if you had asked a native speaker to translate the documents for you.

Prompt Examples:

  • “Translate this pre-colonoscopy instruction sheet into Malay.”
  • “Write a discharge summary for a patient with pneumonia, using language they will easily understand.”

Extra Layer of Medication Safety

With patients often taking multiple medications, drug interactions are a constant concern. When we prescribe a new medication for someone with a complex list, we prompt the LLM to analyze it. This acts as a safety net that helps us catch potential problems.

Prompt Examples:

  • “Analyze this patient’s medication list for potential drug interactions: [list of medications]”
  • “This 70-year-old male patient has renal failure. Are there any dosage adjustments needed for the following medications: [list of medications]”
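If you run this check routinely, the bracketed placeholder can be filled in programmatically rather than pasted by hand. A small sketch (the function name and medication examples are hypothetical):

```python
def interaction_prompt(medications: list[str]) -> str:
    """Fill a medication list into a reusable interaction-check prompt."""
    return ("Analyze this patient's medication list for potential "
            "drug interactions: " + ", ".join(medications))

# Hypothetical example list, not a real patient's regimen:
print(interaction_prompt(["warfarin", "amiodarone", "metformin"]))
# Analyze this patient's medication list for potential drug interactions: warfarin, amiodarone, metformin
```

The same privacy caveat applies here: keep real patient identifiers out of the list before it goes anywhere near an external model.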

Tailored Treatment Planning

LLMs can analyze a patient’s data and suggest evidence-based treatment options that fit their unique needs. Used properly, it is like having a super-powered consultant on the team, leading to more personalized care.

Prompt Examples:

  • “Based on this patient’s history, suggest evidence-based treatment options for their bipolar disorder, including both medication and therapy recommendations.”
  • “This patient has poorly controlled asthma despite using an inhaled corticosteroid. What other treatment options should I consider? Provide pros and cons of each.”

Some Ethical Considerations

While artificial intelligence (AI) has made significant progress, it is vital to acknowledge the ethical concerns associated with its use in healthcare. Here are some core ethical concepts to keep in mind whenever we utilize AI in healthcare:

  1. AI enhances, not replaces, our judgment: LLMs are amazing tools, but I always critically evaluate their output. My clinical expertise is what ultimately guides patient care.
  2. Beware of the bias: We should be mindful that LLMs can reflect biases present in their training data. It is our responsibility to ensure their suggestions don’t perpetuate unfair or inequitable care.
  3. Privacy above all: Patient confidentiality is paramount. We should choose LLMs with robust security or those built specifically for healthcare.
  4. Openness builds trust: Consider informing patients when LLMs play a role in their care. It fosters trust and keeps them in the loop about how technology is shaping their healthcare experience.
