A 26-year-old woman with a reported history of acyclovir-resistant herpes complains of a recurrent, stinging rash around her mouth. Topical tacrolimus made it worse, she said. On exam, she has grouped pustules on her cutaneous lip. I mentioned her case to colleagues, saying: “I have a patient with acyclovir-resistant herpes who isn’t improving on high-dose Valtrex.” They offered a few alternative diagnoses and treatment recommendations. I tried several, to no avail.
Daniel Kahneman, PhD, with two other authors, has written a brilliant book about this cognitive unreliability called “Noise: A Flaw in Human Judgment” (New York: Hachette Book Group, 2021).
Both bias and noise create trouble for us. Although biases get more attention, noise is both more prevalent and insidious. In a 2016 article, Dr. Kahneman and coauthors use a bathroom scale as an analogy to explain the difference. “We would say that the scale is biased if its readings are generally either too high or too low. A scale that consistently underestimates true weight by exactly 4 pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy.” In the case presented, “measurements” by me and my colleagues were returning different “readings.” There is one true diagnosis and best treatment, yet because of noise, we waste time and resources by not getting it right the first time.
There is also evidence of bias in this case. For example, there’s probably some confirmation bias: The patient said she has a history of antiviral-resistant herpes; therefore, her rash appears to us to be herpes. There might also be salience bias: It’s easy to see how prominent pustules could be read as herpes simplex virus. Noise contributes to many misdiagnoses but is trickier to see. In most instances, we don’t have the opportunity to get multiple assessments of the same case. When it is examined, though, interrater reliability in medicine is often shockingly low, an indication of how much noise there is in our clinical judgments. This leads to waste and frustration – and can even be dangerous when we’re trying to diagnose cancers such as melanoma, lung cancer, or breast cancer.
Dr. Kahneman and colleagues have excellent recommendations on how to reduce noise, such as tips for good decision hygiene (e.g., using differential diagnoses) and using algorithms (e.g., calculating Apgar or LACE scores). I also liked their strategy of aggregating expert opinions. Fascinatingly, averaging multiple independent assessments is mathematically guaranteed to reduce noise. (God, I love economists.) This is true of both measurements and opinions: Averaging n independent judgments divides the noise by the square root of n, so averaging 100 judgments for a case reduces noise by 90%. Twenty colleagues’ opinions would reduce noise by almost 80%. However, those 20 opinions must be independent to avoid spurious agreement. (Again, math for the win.)
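For the skeptical, the square-root claim is easy to check yourself. Here is a minimal simulation sketch (not from the book; the function name, the noise level of 10 units, and the Gaussian noise model are my own illustrative assumptions): each simulated rater’s judgment is the true value plus random noise, and we measure how much spread remains after averaging n raters.

```python
import random
import statistics

def remaining_noise(n_judgments, true_value=100.0, noise_sd=10.0,
                    trials=20000, seed=42):
    """Estimate the noise (standard deviation) left in the averaged
    answer when n independent noisy judgments are combined.

    Assumes each judgment = true value + Gaussian noise (an
    illustrative model, not a claim about real clinical data)."""
    rng = random.Random(seed)
    averages = []
    for _ in range(trials):
        judgments = [rng.gauss(true_value, noise_sd) for _ in range(n_judgments)]
        averages.append(sum(judgments) / n_judgments)
    # Spread of the averaged answers across many simulated cases:
    return statistics.stdev(averages)

single_rater_noise = 10.0  # noise of one judgment, by construction
for n in (1, 20, 100):
    left = remaining_noise(n)
    reduction = 1 - left / single_rater_noise
    print(f"n={n:3d}: noise left ~ {left:.2f}, reduced by about {reduction:.0%}")
```

Running it shows the pattern the column describes: 100 raters cut the noise by roughly 90%, and 20 raters by a bit under 80%, matching the 1/√n rule.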
I showed photos of my patient to a few other dermatologists. They independently returned the same result: perioral dermatitis. This was the correct diagnosis, and it reminded me why grand rounds and tumor boards are such a great help. Multiple, independent assessments are more likely to get it right than just one opinion because we are canceling out the noise. But remember, grand rounds has to be old-school style – no looking at your coresidents’ answers before giving yours!
Our patient cleared after restarting her topical tacrolimus along with a bit of doxycycline. Credit the wisdom of the crowd. Reassuringly, though, Dr. Kahneman also shows that expertise does matter in minimizing error. So that fellowship you did was still a great idea.
Dr. Benabio is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. The opinions expressed in this column are his own and do not represent those of Kaiser Permanente. Dr. Benabio is @Dermdoc on Twitter. He reports having no conflicts of interest. Write to him at dermnews@mdedge.com.