I read a great article by Mitesh Patel exploring how ChatGPT might help physicians review patient health records and improve patient outcomes. Physicians face a vast amount of information for every patient, and navigating it while seeing 30 or 40 patients a day is challenging. (Article)
The article contained a publicly available patient health note with the identifiers removed. (Patient Record)
The problem of bias and stigmatizing language in EHRs:
“The EHR’s important role in clinician-clinician communications and clinician-patient relationships raises concerns about the use of stigmatizing language in medical records. Stigmas mark or signal that someone is less worthwhile and hence merits inferior treatment. […] Stigmatizing language generally takes 3 forms: (1) marking or labeling someone as other; (2) assigning responsibility (ie, blame); and (3) invoking danger or peril. […]
Stigmatizing language may compromise care by communicating discriminatory beliefs between clinicians.” Himmelstein G, Bates D, Zhou L. Examination of Stigmatizing Language in the Electronic Health Record. JAMA Netw Open. 2022 Jan 4;5(1):e2144967. doi: 10.1001/jamanetworkopen.2021.44967. PMID: 35084481; PMCID: PMC8796019. (Link)
I wondered if ChatGPT would be able to identify biased and stigmatizing language in the record and suggest ways to reduce it.
I thought about how we might design an EHR system that helps physicians and care providers write better patient notes: one that suggests corrections in real time and flags possible bias and stigmatizing language when displaying patient notes.
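To make the idea concrete, here is a minimal sketch of the first pass such a system might run over a draft note. The word list comes straight from Prompt 1 below; everything else (the function name, the flagging approach) is my own illustration, not a real EHR API.

```python
import re

# Terms the bias literature flags as potentially stigmatizing
# (the same word list used in Prompt 1 below).
FLAGGED_TERMS = [
    "non-adherent", "aggressive", "agitated", "angry", "challenging",
    "combative", "non-compliant", "confront", "non-cooperative",
    "defensive", "exaggerate", "hysterical", "unpleasant", "refuse", "resist",
]

def flag_stigmatizing_terms(note_text: str) -> list[dict]:
    """Return each candidate term with a snippet of surrounding context."""
    findings = []
    for term in FLAGGED_TERMS:
        # \w* also catches inflections such as "refused" or "resisting".
        for match in re.finditer(rf"\b{re.escape(term)}\w*", note_text, re.IGNORECASE):
            start, end = match.span()
            findings.append({
                "term": match.group(),
                "context": note_text[max(0, start - 40):end + 40],
            })
    return findings

if __name__ == "__main__":
    sample = "Patient was non-compliant with medication and refused the oxygen mask."
    for finding in flag_stigmatizing_terms(sample):
        print(f'{finding["term"]!r}: ...{finding["context"]}...')
```

A keyword scan like this can only surface candidates; whether a term is actually stigmatizing depends on context, which is exactly the judgment I wanted to see whether ChatGPT could make.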
I created the prompts for ChatGPT using two articles about bias in electronic health records. (Article 1, Article 2)
So, could ChatGPT reduce bias in electronic health records? The answer appears to be yes, but decide for yourself. Below are the prompts I used and what happened. The ChatGPT Output sections were generated by ChatGPT without edits from me, other than clarifications in brackets.
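If you would rather reproduce the experiment programmatically than through the ChatGPT interface, a minimal sketch with the OpenAI Python SDK might look like this. It assumes the de-identified note has already been extracted to plain text; the file name and model choice are my own placeholders.

```python
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumes the de-identified patient note was extracted to plain text first.
note_text = Path("patient_note.txt").read_text()

prompt = (
    "In the role of a qualitative researcher trained to reduce bias in "
    "electronic health records, review this patient medical record and "
    "identify bias and stigmatizing language.\n\n" + note_text
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; I used ChatGPT with the full Prompt 1 below
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```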
Prompt 1
Hp2.pdf [this is the uploaded patient note]
In the role of a qualitative researcher trained to reduce bias in electronic health records, review this patient medical record and identify bias and stigmatizing language. Biased and stigmatizing language will often include these words: non-adherent, aggressive, agitated, angry, challenging, combative, non-compliant, confront, non-cooperative, defensive, exaggerate, hysterical, unpleasant, refuse and resist.

Examples of biased or stigmatizing use of quotes:
1) His mother stated that the wound was supposed to be “all closed up” but that the lesion “busted open.”
2) Chief complaint – “I stay tired.”
3) Reports that if she were to fall, she would just “lay there” until someone found her.
4) He does not believe that he has prostate cancer because his “bowels are working fine.”
5) Patient reports pain is “Still a 10.”
6) The father said that his daughter had a “reaction” to the medication.

Examples of stigmatizing language:
1) Mr. R is a 28-year-old sickle cell patient with chronic left hip osteomyelitis who comes to the ED stating he has 10/10 pain “all up in my arms and legs.”
2) He is narcotic dependent and in our ED frequently. At home he reportedly takes 100 mg OxyContin BID and oxycodone 5 mg for breakthrough pain.
3) Over the past few days, he says that he has taken 2 tabs every 4–6 hours. About 3 months ago, patient states that the housing authority moved him to a new neighborhood, and he now has to wheel himself in a manual wheelchair up 3 blocks from the bus stop.
4) Yesterday afternoon, he was hanging out with friends outside McDonald’s where he wheeled himself around more than usual and got dehydrated due to the heat. He believes that this, along with some “stressful situations,” precipitated his current crisis.
5) The pain is aching in quality, severe (10/10), and has not been helped by any of the narcotic medications he says he has already taken. On physical exam, he appears to be in distress. He has no fever and his pulse ox is 96% on RA. The rest of the physical exam is normal although he reports tenderness to palpation on the left hip.
6) After 1 hour, the nurse documents: Mr. R is sleeping but easily arousable and has been cussing at nurse. He refuses to wear his oxygen mask and is insisting that his pain is “still a 10.” His girlfriend is on the bed with shoes on and requests a bus token to go home.
Raw ChatGPT Output #1
Upon reviewing the medical record provided, several instances of potentially biased and stigmatizing language can be identified. These not only could affect the patient’s quality of care but also could undermine their dignity and self-esteem. Below are the notable instances:
1. Stigmatizing Language Regarding Pain and Narcotic Use: The document refers to the patient having “a past violation of a pain contract” and mentions “suspicions that she was narcotic-seeking” which can be stigmatizing by suggesting drug-seeking behavior without context of her pain and medical condition. This may lead to under-treatment of pain due to biased assumptions.
2. Usage of Potentially Biased Terms: Terms such as “non-compliant” and “narcotic-seeking” are used in the patient’s history, which may negatively color the reader’s perception of the patient as being difficult or untrustworthy. These terms can contribute to stigma and might impact the care provided by influencing healthcare providers’ attitudes and actions unconsciously.
3. Descriptions of Personal History: The patient’s past involvement in “the distribution of crack cocaine” and her relationship with a spouse who uses crack are mentioned. While these details may be relevant to her social history, the way they are presented could carry a judgmental tone that may not be directly relevant to the immediate medical care, potentially biasing healthcare providers against her.
4. Characterizations of Behavior: There is mention of the patient’s past behavior of leaving the hospital when not prescribed narcotics. This could be perceived as confrontational or non-cooperative, which are terms linked to negative bias.
5. Biased Description of Social Circumstances: The mention of the patient being “on disability for her bipolar diagnosis” and her activities primarily involving staying at home could subtly imply a negative stereotype about individuals with psychiatric disorders and their capabilities or societal roles.
It’s crucial in medical documentation to maintain a neutral tone and focus on objective, clinically relevant information. Avoiding judgmental or stigmatizing language can help ensure all patients receive equitable and respectful care. Adjustments in language that maintain an objective and non-judgmental tone could significantly improve the quality of the medical records.
For example:
– Instead of “narcotic-seeking,” describe the patient’s reports of pain and their history with pain management.
– Replace judgmental terms like “non-compliant” with specific descriptions of the patient’s adherence behaviors.
– Contextualize the patient’s social circumstances without implying judgment, focusing on how these might clinically impact their health.
[ChatGPT output #1 end]
Prompt 2
Repeat the analysis as the bias reduction expert applying the ideas in this document: https://www.chcs.org/media/Words-Matter-Strategies-to-Reduce-Bias-in-Electronic-Health-Records_102022.pdf
Raw ChatGPT Output #2
The guidance from “Words Matter: Strategies to Reduce Bias in Electronic Health Records” emphasizes using language in medical records that is precise, respectful, and devoid of judgement. The document advocates for patient-centered terminology that objectively describes behaviors and conditions without implying blame or negative character traits. Here’s a revised analysis of the medical record, applying these principles:
1. Neutral and Specific Descriptions of Pain Management and Substance Use:
– Original Phrase: “Past violation of a pain contract” and “suspicions that she was narcotic-seeking”
– Revised Approach: Discuss the patient’s history with pain management, specifying any difficulties or challenges in following prescribed treatments without implying intent or character judgement. For example, “Patient has had challenges adhering to previously agreed-upon pain management plans.”
2. Objective Reporting of Medication and Treatment Adherence:
– Original Phrase: “Non-compliant”
– Revised Approach: Specify the aspects of treatment with which the patient has struggled, using neutral language that describes actions rather than character. For example, “Patient has not taken medication as prescribed on several occasions.”
3. Contextualizing Social History Without Stigma:
– Original Mention: “Involved in the distribution of crack cocaine” and relationship dynamics regarding substance use.
– Revised Approach: Provide necessary medical or social context that is directly relevant to the care and health of the patient, focusing on health impacts rather than moral judgments. For example, “Patient has a history that includes exposure to environments with prevalent substance use, which may impact her social and psychological health.”
4. Describing Behavioral Observations Without Bias:
– Original Phrase: “Left during past hospital visits when narcotic medications would not be prescribed”
– Revised Approach: Describe the behavior observed and its context factually, without implying motive. For example, “Patient has previously chosen to leave the hospital during visits when specific pain management expectations were not met.”
5. Avoiding Stigmatizing Descriptions of Disability and Lifestyle:
– Original Phrase: “On disability for her bipolar diagnosis”
– Revised Approach: Factually state the patient’s health conditions and their impact on daily life without stigma. For example, “Patient receives disability benefits due to a diagnosed bipolar disorder, which impacts her daily functioning.”
By revising the language in the medical record according to these principles, healthcare providers can foster a more respectful and equitable healthcare environment. It also helps in ensuring that the documentation accurately reflects the patient’s health status and interactions with the healthcare system, based purely on observable facts and without the interference of subjective bias.
[ChatGPT output #2 end]