Clinical Psychiatry News is the online destination and multimedia properties of Clinica Psychiatry News, the independent news publication for psychiatrists. Since 1971, Clinical Psychiatry News has been the leading source of news and commentary about clinical developments in psychiatry as well as health care policy and regulations that affect the physician's practice.
AI in Medicine: Are Large Language Models Ready for the Exam Room?
In seconds, Ravi Parikh, MD, an oncologist at the Emory University School of Medicine in Atlanta, had a summary of his patient’s entire medical history. Normally, Parikh skimmed the cumbersome files before seeing a patient. However, the artificial intelligence (AI) tool his institution was testing could list the highlights he needed in a fraction of the time.
“On the whole, I like it ... it saves me time,” Parikh said of the tool. “But I’d be lying if I told you it was perfect all the time. It’s interpreting the [patient] history in some ways that may be inaccurate,” he said.
Within the first week of testing the tool, Parikh started to notice that the large language model (LLM) made a particular mistake in his patients with prostate cancer. If their prostate-specific antigen test results came back slightly elevated — which is part of normal variation — the LLM recorded it as disease progression. Because Parikh reviews all his notes — with or without using an AI tool — after a visit, he easily caught the mistake before it was added to the chart. “The problem, I think, is if these mistakes go under the hood,” he said.
In the data science world, these mistakes are called hallucinations. And a growing body of research suggests they’re happening more frequently than is safe for healthcare. The industry promised LLMs would alleviate administrative burden and reduce physician burnout. But so far, studies show these AI-tool mistakes often create more work for doctors, not less. To truly help physicians and be safe for patients, some experts say healthcare needs to build its own LLMs from the ground up. And all agree that the field desperately needs a way to vet these algorithms more thoroughly.
Prone to Error
Right now, “I think the industry is focused on taking existing LLMs and forcing them into usage for healthcare,” said Nigam H. Shah, MBBS, PhD, chief data scientist for Stanford Health. However, the value of deploying general LLMs in the healthcare space is questionable. “People are starting to wonder if we’re using these tools wrong,” he told this news organization.
In 2023, Shah and his colleagues evaluated seven LLMs on their ability to answer electronic health record–based questions. For realistic tasks, the error rate in the best cases was about 35%, he said. “To me, that rate seems a bit high ... to adopt for routine use.”
A study earlier this year by the UC San Diego School of Medicine showed that using LLMs to respond to patient messages increased the time doctors spent on messages. And this summer, a study by the clinical AI firm Mendel found that when GPT-4o or Llama-3 were used to summarize patient medical records, almost every summary contained at least one type of hallucination.
“We’ve seen cases where a patient does have drug allergies, but the system says ‘no known drug allergies’ ” in the medical history summary, said Wael Salloum, PhD, cofounder and chief science officer at Mendel. “That’s a serious hallucination.” And if physicians have to constantly verify what the system is telling them, that “defeats the purpose [of summarization],” he said.
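One way to catch the class of hallucination Salloum describes is to cross-check an LLM summary against the structured fields already in the record before a clinician ever sees it. The sketch below is a minimal illustration, not any vendor's actual pipeline; the record and summary formats are assumptions for the example.

```python
# Minimal sketch (hypothetical record/summary formats): cross-check an
# LLM-generated summary against structured EHR fields before accepting it.

def check_allergy_claim(ehr_record: dict, summary_text: str) -> list[str]:
    """Return warnings where the summary contradicts the structured record."""
    warnings = []
    allergies = ehr_record.get("allergies", [])
    claims_none = "no known drug allergies" in summary_text.lower()
    if claims_none and allergies:
        warnings.append(
            f"Summary says 'no known drug allergies' but record lists: {allergies}"
        )
    return warnings

record = {"allergies": ["penicillin"]}
summary = "45-year-old male. No known drug allergies. Presents with cough."
print(check_allergy_claim(record, summary))
```

A real system would need to reconcile free text against coded fields (for example, FHIR AllergyIntolerance resources) across many claim types, but the principle is the same: the structured data serves as ground truth for auditing the generated text.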
A Higher Quality Diet
Part of the trouble with LLMs is that there’s just not enough high-quality information to feed them. The algorithms are insatiable, requiring vast swaths of data for training. GPT-3.5, for instance, was trained on 570 GB of data from the internet, more than 300 billion words. And to train GPT-4o, OpenAI reportedly transcribed more than 1 million hours of YouTube content.
However, the strategies that built these general LLMs don’t always translate well to healthcare. The internet is full of low-quality or misleading health information from wellness sites and supplement advertisements. And even data that are trustworthy, like millions of clinical studies and US Food and Drug Administration (FDA) statements, can be outdated, Salloum said. And “an LLM in training can’t distinguish good from bad,” he added.
The good news is that clinicians don’t rely on controversial information in the real world. Medical knowledge is standardized. “Healthcare is a domain rich with explicit knowledge,” Salloum said. So there’s potential to build a more reliable LLM that is guided by robust medical standards and guidelines.
It’s possible that healthcare could use small language models, the pocket-sized cousins of LLMs, which perform narrower tasks, need only bite-sized datasets, require fewer resources, and are easier to fine-tune, according to Microsoft’s website. Shah said training these smaller models on real medical data might be an option, like an LLM meant to respond to patient messages that could be trained with real messages sent by physicians.
Several groups are already working on databases of standardized human medical knowledge or real physician responses. “Perhaps that will work better than using LLMs trained on the general internet. Those studies need to be done,” Shah said.
Jon Tamir, assistant professor of electrical and computer engineering and co-lead of the AI Health Lab at The University of Texas at Austin, said, “The community has recognized that we are entering a new era of AI where the dataset itself is the most important aspect. We need training sets that are highly curated and highly specialized.
“If the dataset is highly specialized, it will definitely help reduce hallucinations,” he said.
Cutting Overconfidence
A major problem with LLM mistakes is that they are often hard to detect. Hallucinations can be highly convincing even if they’re highly inaccurate, according to Tamir.
When Shah, for instance, was recently testing an LLM on de-identified patient data, he asked the LLM which blood test the patient last had. The model responded with “complete blood count [CBC].” But when he asked for the results, the model gave him white blood count and other values. “Turns out that record did not have a CBC done at all! The result was entirely made up,” he said.
Making healthcare LLMs safer and more reliable will mean training AI to acknowledge potential mistakes and uncertainty. Existing LLMs are trained to project confidence and produce a lot of answers, even when there isn’t one, Salloum said. They rarely respond with “I don’t know” even when their prediction has low confidence, he added.
Healthcare stands to benefit from a system that highlights uncertainty and potential errors. For instance, if a patient’s history shows they have smoked, stopped smoking, vaped, and started smoking again, the LLM might call them a smoker but flag the assessment as uncertain because the chronology is complicated, Salloum said.
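Salloum's smoking example can be sketched as a structured output that carries an explicit uncertainty flag alongside the value. This is an illustrative toy, with an assumed event-list representation of the history, not a description of any deployed model.

```python
# Sketch (assumed data shapes): derive a smoking status from a chronological
# history and flag it as uncertain when the chronology is mixed.

from dataclasses import dataclass

@dataclass
class Finding:
    value: str
    uncertain: bool
    reason: str = ""

def smoking_status(events: list[str]) -> Finding:
    # events are chronological entries like "smoker", "quit", "vaping"
    current = events[-1] if events else "unknown"
    # Count status changes; a back-and-forth history warrants a flag
    changes = sum(1 for a, b in zip(events, events[1:]) if a != b)
    if changes >= 2:
        return Finding(current, True, "complicated chronology: " + " -> ".join(events))
    return Finding(current, False)

print(smoking_status(["smoker", "quit", "vaping", "smoker"]))
```

The point is the interface, not the heuristic: a downstream clinician sees not just "smoker" but a signal that this particular field deserves a second look.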
Tamir added that this strategy could improve LLM and doctor collaboration by homing in on where human expertise is needed most.
Too Little Evaluation
For any improvement strategy to work, LLMs — and all AI-assisted healthcare tools — first need a better evaluation framework. So far, LLMs have “been used in really exciting ways but not really well-vetted ways,” Tamir said.
While some AI-assisted tools, particularly in medical imaging, have undergone rigorous FDA evaluations and earned approval, most haven’t. And because the FDA only regulates algorithms that are considered medical devices, Parikh said that most LLMs used for administrative tasks and efficiency don’t fall under the regulatory agency’s purview.
But these algorithms still have access to patient information and can directly influence patient and doctor decisions. Third-party regulatory agencies are expected to emerge, but it’s still unclear who those will be. Before developers can build a safer and more efficient LLM for healthcare, they’ll need better guidelines and guardrails. “Unless we figure out evaluation, how would we know whether the healthcare-appropriate large language models are better or worse?” Shah asked.
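The evaluation gap Shah describes comes down to measurement: before comparing models, you need a way to score how many claims in a generated summary are unsupported by the source record. The toy harness below shows the shape of such a metric; the fact representation and examples are invented for illustration.

```python
# Toy evaluation sketch (invented examples): score each summary's claims
# against reference facts and report a per-summary hallucination rate.

def hallucination_rate(summary_facts, reference_facts):
    """Fraction of summary claims not supported by the reference record."""
    unsupported = [f for f in summary_facts if f not in reference_facts]
    return len(unsupported) / max(len(summary_facts), 1), unsupported

examples = [
    (["psa elevated", "disease progression"], {"psa elevated"}),
    (["no known drug allergies"], {"allergy: penicillin"}),
]
for facts, reference in examples:
    rate, bad = hallucination_rate(facts, reference)
    print(f"rate={rate:.2f} unsupported={bad}")
```

Real benchmarks face the hard part this sketch skips: extracting and matching clinical facts from free text reliably enough that the metric itself can be trusted.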
A version of this article appeared on Medscape.com.
Cybersecurity Concerns Continue to Rise With Ransom, Data Manipulation, AI Risks
From the largest healthcare companies to solo practices, just about every organization in medicine faces a risk for costly cyberattacks. In recent years, hackers have threatened to release the personal information of patients and employees — or paralyze online systems — unless they’re paid a ransom.
Should companies pay? It’s not an easy answer, a pair of experts told colleagues in an American Medical Association (AMA) cybersecurity webinar on October 18. It turns out that each choice — pay or don’t pay — can end up being costly.
This is just one of the new challenges facing the American medical system on the cybersecurity front, the speakers said. Others include the possibility that hackers will manipulate patient data — turning a medical test negative, for example, when it’s actually positive — and take advantage of the powers of artificial intelligence (AI).
The AMA held the webinar to educate physicians about cybersecurity risks and defenses, an especially hot topic in the wake of February’s Change Healthcare hack, which cost UnitedHealth Group an estimated $2.5 billion — so far — and deeply disrupted the American healthcare system.
Cautionary tales abound. Greg Garcia, executive director for cybersecurity of the Health Sector Coordinating Council, a coalition of medical industry organizations, pointed to a Pennsylvania clinic that refused to pay a ransom to prevent the release of hundreds of images of patients with breast cancer undressed from the waist up. Garcia told webinar participants that the ransom was $5 million.
Risky Choices
While the Federal Bureau of Investigation recommends against paying a ransom, this can be a risky choice, Garcia said. Hackers released the images, and the center has reportedly agreed to settle a class-action lawsuit for $65 million. “They traded $5 million for $60 million,” Garcia added, slightly misstating the settlement amount.
Health systems have been cagey about whether they’ve paid ransoms to prevent private data from being made public in cyberattacks. If a ransom is demanded, “it’s every organization for itself,” Garcia said.
He highlighted the case of a chain of psychiatry practices in Finland that suffered a ransomware attack in 2020. The hackers “contacted the patients and said: ‘Hey, call your clinic and tell them to pay the ransom. Otherwise, we’re going to release all your psychiatric notes to the public.’ ”
Cyberattacks continue. In October, Boston Children’s Health Physicians announced that it had suffered a “recent security incident” involving data — possibly including Social Security numbers and treatment information — regarding patients and employees. A hacker group reportedly claimed responsibility and wants the system, which boasts more than 300 clinicians, to pay a ransom or else it will release the stolen information.
Should Paying Ransom Be a Crime?
Christian Dameff, MD, MS, an emergency medicine physician and director of the Center for Healthcare Cybersecurity at the University of California (UC), San Diego, noted that there are efforts to turn paying ransom into a crime. “If people aren’t paying ransoms, then ransomware operators will move to something else that makes them money.”
Dameff urged colleagues to understand that we no longer live in a world where clinicians think of technology only when they call the IT department to help them reset a password.
New challenges face clinicians, he said. “How do we develop better strategies, downtime procedures, and safe clinical care in an era where our vital technology may be gone, not just for an hour or 2, but as is the case with these ransomware attacks, sometimes weeks to months.”
Garcia said “cybersecurity is everybody’s responsibility, including frontline clinicians. Because you’re touching data, you’re touching technology, you’re touching patients, and all of those things combine to present some vulnerabilities in the digital world.”
Next Frontier: Hackers May Manipulate Patient Data
Dameff said future hackers may use AI to manipulate individual patient data in ways that threaten patient health. AI makes this easier to accomplish.
“What if I delete your allergies in your electronic health record, or I manipulate your chest x-ray, or I change your lab values so it looks like you’re in diabetic ketoacidosis when you’re not so a clinician gives you insulin when you don’t need it?”
Garcia highlighted another new threat: Phishing efforts that are harder to ignore thanks to AI.
“One of the most successful ways that hackers get in, disrupt systems, and steal data is through email phishing, and it’s only going to get better because of artificial intelligence,” he said. “No longer are you going to have typos in that email written by a hacking group in Nigeria or in China. It’s going to be perfect looking.”
What can practices and healthcare systems do? Garcia highlighted federal health agency efforts to encourage organizations to adopt best practices in cybersecurity.
“If you’ve got a data breach, and you can show to the US Department of Health & Human Services [HHS] you have implemented generally recognized cybersecurity controls over the past year, that you have done your best, you did the right thing, and you still got hit, HHS is directed to essentially take it easy on you,” he said. “That’s a positive incentive.”
Ransomware Guide in the Works
Dameff said UC San Diego’s Center for Healthcare Cybersecurity plans to publish a free cybersecurity guide in 2025 that will include specific information about ransomware attacks for medical specialties such as cardiology, trauma surgery, and pediatrics.
“Then, should you ever be ransomed, you can pull out this guide. You’ll know what’s going to kind of happen, and you can better prepare for those effects.”
Will the future president prioritize healthcare cybersecurity? That remains to be seen, but crises do have the capacity to concentrate the mind, experts said.
The nation’s capital “has a very short memory, a short attention span. The policymakers tend to be reactive,” Dameff said. “All it takes is yet another Change Healthcare–like attack that disrupts 30% or more of the nation’s healthcare system for the policymakers to sit up, take notice, and try to come up with solutions.”
In addition, he said, an estimated two data breaches/ransomware attacks are occurring per day. “The fact is that we’re all patients, up to the President of the United States and every member of the Congress is a patient.”
There’s a “very existential, very palpable understanding that cyber safety is patient safety and cyber insecurity is patient insecurity,” Dameff said.
A version of this article appeared on Medscape.com.
From the largest healthcare companies to solo practices, just about every organization in medicine faces a risk for costly cyberattacks. In recent years, hackers have threatened to release the personal information of patients and employees — or paralyze online systems — unless they’re paid a ransom.
Should companies pay? It’s not an easy answer, a pair of experts told colleagues in an American Medical Association (AMA) cybersecurity webinar on October 18. It turns out that each choice — pay or don’t pay — can end up being costly.
This is just one of the new challenges facing the American medical system on the cybersecurity front, the speakers said. Others include the possibility that hackers will manipulate patient data — turning a medical test negative, for example, when it’s actually positive — and take advantage of the powers of artificial intelligence (AI).
The AMA held the webinar to educate physicians about cybersecurity risks and defenses, an especially hot topic in the wake of February’s Change Healthcare hack, which cost UnitedHealth Group an estimated $2.5 billion — so far — and deeply disrupted the American healthcare system.
Cautionary tales abound. Greg Garcia, executive director for cybersecurity of the Health Sector Coordinating Council, a coalition of medical industry organizations, pointed to a Pennsylvania clinic that refused to pay a ransom to prevent the release of hundreds of images of patients with breast cancer undressed from the waist up. Garcia told webinar participants that the ransom was $5 million.
Risky Choices
While the Federal Bureau of Investigation recommends against paying a ransom, this can be a risky choice, Garcia said. Hackers released the images, and the center has reportedly agreed to settle a class-action lawsuit for $65 million. “They traded $5 million for $60 million,” Garcia added, slightly misstating the settlement amount.
Health systems have been cagey about whether they’ve paid ransoms to prevent private data from being made public in cyberattacks. If a ransom is demanded, “it’s every organization for itself,” Garcia said.
He highlighted the case of a chain of psychiatry practices in Finland that suffered a ransomware attack in 2020. The hackers “contacted the patients and said: ‘Hey, call your clinic and tell them to pay the ransom. Otherwise, we’re going to release all your psychiatric notes to the public.’ ”
Cyberattacks continue. In October, Boston Children’s Health Physicians announced that it had suffered a “recent security incident” involving data — possibly including Social Security numbers and treatment information — regarding patients and employees. A hacker group reportedly claimed responsibility and wants the system, which boasts more than 300 clinicians, to pay a ransom or else it will release the stolen information.
Should Paying Ransom Be a Crime?
Christian Dameff, MD, MS, an emergency medicine physician and director of the Center for Healthcare Cybersecurity at the University of California (UC), San Diego, noted that there are efforts to turn paying ransom into a crime. “If people aren’t paying ransoms, then ransomware operators will move to something else that makes them money.”
Dameff urged colleagues to understand that clinicians no longer live in a world where they can think about technology only when they call the IT department for help resetting a password.
New challenges face clinicians, he said. “How do we develop better strategies, downtime procedures, and safe clinical care in an era where our vital technology may be gone, not just for an hour or 2, but as is the case with these ransomware attacks, sometimes weeks to months.”
Garcia said “cybersecurity is everybody’s responsibility, including frontline clinicians. Because you’re touching data, you’re touching technology, you’re touching patients, and all of those things combine to present some vulnerabilities in the digital world.”
Next Frontier: Hackers May Manipulate Patient Data
Dameff said future hackers may use AI to manipulate individual patient data in ways that threaten patient health. AI makes this easier to accomplish.
“What if I delete your allergies in your electronic health record, or I manipulate your chest x-ray, or I change your lab values so it looks like you’re in diabetic ketoacidosis when you’re not, so a clinician gives you insulin when you don’t need it?”
Garcia highlighted another new threat: Phishing efforts that are harder to ignore thanks to AI.
“One of the most successful ways that hackers get in, disrupt systems, and steal data is through email phishing, and it’s only going to get better because of artificial intelligence,” he said. “No longer are you going to have typos in that email written by a hacking group in Nigeria or in China. It’s going to be perfect looking.”
What can practices and healthcare systems do? Garcia highlighted federal health agency efforts to encourage organizations to adopt best practices in cybersecurity.
“If you’ve got a data breach, and you can show to the US Department of Health & Human Services [HHS] you have implemented generally recognized cybersecurity controls over the past year, that you have done your best, you did the right thing, and you still got hit, HHS is directed to essentially take it easy on you,” he said. “That’s a positive incentive.”
Ransomware Guide in the Works
Dameff said UC San Diego’s Center for Healthcare Cybersecurity plans to publish a free cybersecurity guide in 2025 that will include specific information about ransomware attacks for medical specialties such as cardiology, trauma surgery, and pediatrics.
“Then, should you ever be ransomed, you can pull out this guide. You’ll know what’s going to kind of happen, and you can better prepare for those effects.”
Will the future president prioritize healthcare cybersecurity? That remains to be seen, but crises do have the capacity to concentrate the mind, experts said.
The nation’s capital “has a very short memory, a short attention span. The policymakers tend to be reactive,” Dameff said. “All it takes is yet another Change Healthcare–like attack that disrupts 30% or more of the nation’s healthcare system for the policymakers to sit up, take notice, and try to come up with solutions.”
In addition, he said, an estimated two data breaches/ransomware attacks are occurring per day. “The fact is that we’re all patients, up to the President of the United States and every member of the Congress is a patient.”
There’s a “very existential, very palpable understanding that cyber safety is patient safety and cyber insecurity is patient insecurity,” Dameff said.
A version of this article appeared on Medscape.com.
Six Tips for Media Interviews
As a physician, you might be contacted by the media to provide your professional opinion and advice. Or you might be looking for media interview opportunities to market your practice or side project. And if you do research, media interviews can be an effective way to spread the word. It’s important to prepare for a media interview so that you achieve the outcome you are looking for.
Keep your message simple. When you are a subject expert, you might think that the basics are obvious or even boring, and that the nuances are more important. However, most of the audience is looking for big-picture information that they can apply to their lives. Consider a few key takeaways, keeping in mind that your interview is likely to be edited to short sound bites or a few quotes. It may help to jot down notes so that you cover the fundamentals clearly. You could even write and rehearse a script beforehand. If there is something complicated or subtle that you want to convey, you can preface it by saying, “This is confusing but very important …” to let the audience know to give extra consideration to what you are about to say.
Avoid extremes and hyperbole. Sometimes, exaggerated statements make their way into medical discussions. Statements such as “it doesn’t matter how many calories you consume — it’s all about the quality” are common oversimplifications. But you might be upset to see your name next to a comment like this because it is not actually correct. Check the phrasing of your key takeaways to avoid being stuck defending or explaining an inaccurate statement when your patients ask you about it later.
Ask the interviewers what they are looking for. Many medical topics have some controversial element, so it is good to know what you’re getting into. Find out the purpose of the article or interview before you decide whether it is right for you. It could be about another doctor in town who is being sued; if you don’t want to be associated with that story, it might be best to decline the interview.
Explain your goals. You might accept or pursue an interview to raise awareness about an underrecognized condition. You might want the public to identify and get help for early symptoms, or you might want to create empathy for people coping with a disease you treat. Consider why you are participating in an interview, and communicate that to the interviewer to ensure that your objective can be part of the final product.
Know whom you’re dealing with. It is good to learn about the publication/media channel before you agree to participate. It may have a political bias, or perhaps the interview is intended to promote a specific product. If you agree with and support their purposes, then you may be happy to lend your opinion. But learning about the “voice” of the publication in advance allows you to make an informed decision about whether you want to be identified with a particular political ideology or product endorsement.
Ask to see your quotes before publication. It’s good to have the opportunity to make corrections in case you are accidentally misquoted or misunderstood. It is best to ask to see quotes before you agree to the interview. Some reporters may agree to (or even prefer) a written question-and-answer format so that they can directly quote your responses without rephrasing your words. You could suggest this, especially if you are too busy for a call or live meeting.
As a physician, your insights and advice can be highly beneficial to others. You can also use media interviews to propel your career forward. Doing your homework can ensure that you will be pleased with the final product and how your words were used.
Dr. Moawad, Clinical Assistant Professor, Department of Medical Education, Case Western Reserve University School of Medicine, Cleveland, Ohio, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Duloxetine Bottles Recalled by FDA Because of Potential Carcinogen
The US Food and Drug Administration (FDA) has announced a voluntary manufacturer-initiated recall of more than 7000 bottles of duloxetine delayed-release capsules due to unacceptable levels of a potential carcinogen.
Duloxetine (Cymbalta) is a serotonin-norepinephrine reuptake inhibitor used to treat major depressive disorder, generalized anxiety disorder, fibromyalgia, chronic musculoskeletal pain, and neuropathic pain associated with diabetic peripheral neuropathy.
The recall is due to the detection of the nitrosamine impurity N-nitroso duloxetine above the proposed interim limit.
Nitrosamines are common in water and foods, and low-level exposure to these chemicals is widespread. Exposure to nitrosamine impurities above acceptable levels and over long periods may increase cancer risk, the FDA reported.
“If drugs contain levels of nitrosamines above the acceptable daily intake limits, FDA recommends these drugs be recalled by the manufacturer as appropriate,” the agency noted on its website.
The recall was initiated by Breckenridge Pharmaceutical and covers 7107 bottles of 500-count, 20 mg duloxetine delayed-release capsules. The drug is manufactured by Towa Pharmaceutical Europe and distributed nationwide by BPI.
The affected bottles are from lot number 220128 with an expiration date of 12/2024 and NDC of 51991-746-05.
The recall was initiated on October 10 and is ongoing.
“Healthcare professionals can educate patients about alternative treatment options to medications with potential nitrosamine impurities if available and clinically appropriate,” the FDA advises. “If a medication has been recalled, pharmacists may be able to dispense the same medication from a manufacturing lot that has not been recalled. Prescribers may also determine whether there is an alternative treatment option for patients.”
The FDA has labeled this a “class II” recall, which the agency defines as “a situation in which use of or exposure to a violative product may cause temporary or medically reversible adverse health consequences or where the probability of serious adverse health consequences is remote.”
Nitrosamine impurities have prompted a number of drug recalls in recent years, including oral anticoagulants, metformin, and skeletal muscle relaxants.
The impurities may be found in drugs for a number of reasons, the agency reported. The source may be from a drug’s manufacturing process, chemical structure, or the conditions under which it is stored or packaged.
A version of this article appeared on Medscape.com.
More Evidence Ties Semaglutide to Reduced Alzheimer’s Risk
Adults with type 2 diabetes who were prescribed the GLP-1 RA semaglutide had a significantly lower risk for Alzheimer’s disease compared with their peers who were prescribed any of seven other antidiabetic medications, including other types of GLP-1 receptor–targeting medications.
“These findings support further clinical trials to assess semaglutide’s potential in delaying or preventing Alzheimer’s disease,” wrote the investigators, led by Rong Xu, PhD, with Case Western Reserve School of Medicine, Cleveland, Ohio.
The study was published online on October 24 in Alzheimer’s & Dementia.
Real-World Data
Semaglutide has shown neuroprotective effects in animal models of neurodegenerative diseases, including Alzheimer’s disease and Parkinson’s disease. In animal models of Alzheimer’s disease, the drug reduced beta-amyloid deposition and improved spatial learning and memory, as well as glucose metabolism in the brain.
In a real-world analysis, Xu and colleagues used electronic health record data to identify 17,104 new users of semaglutide and 1,077,657 new users of seven other antidiabetic medications, including other GLP-1 RAs, insulin, metformin, dipeptidyl peptidase 4 inhibitors, sodium-glucose cotransporter 2 inhibitors, sulfonylureas, and thiazolidinediones.
Over 3 years, treatment with semaglutide was associated with significantly reduced risk of developing Alzheimer’s disease, most strongly compared with insulin (hazard ratio [HR], 0.33) and most weakly compared with other GLP-1 RAs (HR, 0.59).
Compared with the other medications, semaglutide was associated with a 40%-70% reduced risk for first-time diagnosis of Alzheimer’s disease in patients with type 2 diabetes, with similar reductions seen across obesity status and gender and age groups, the authors reported.
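The reported 40%-70% range follows directly from the hazard ratios, since relative risk reduction is 1 minus the HR. A minimal sketch of that arithmetic, using only the two HR values quoted above (illustrative; not the study's code):

```python
# Relative risk reduction implied by a hazard ratio: RRR = 1 - HR.
# The two HRs below are the strongest (vs insulin) and weakest
# (vs other GLP-1 RAs) comparisons reported in the study summary.
hazard_ratios = {"insulin": 0.33, "other GLP-1 RAs": 0.59}

for comparator, hr in hazard_ratios.items():
    rrr = 1 - hr  # proportional reduction in the hazard of an AD diagnosis
    print(f"vs {comparator}: HR {hr:.2f} -> {rrr:.0%} lower hazard")

# vs insulin: HR 0.33 -> 67% lower hazard
# vs other GLP-1 RAs: HR 0.59 -> 41% lower hazard
```

The two endpoints, roughly 41% and 67%, bracket the "40%-70% reduced risk" figure the authors report.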
The findings align with recent evidence suggesting GLP-1 RAs may protect cognitive function.
For example, as previously reported, in the phase 2b ELAD clinical trial, adults with early-stage Alzheimer’s disease taking the GLP-1 RA liraglutide exhibited slower decline in memory and thinking and experienced less brain atrophy over 12 months compared with placebo.
Promising, but Preliminary
Reached for comment, Courtney Kloske, PhD, Alzheimer’s Association director of scientific engagement, noted that diabetes is a known risk factor for Alzheimer’s disease and that managing diabetes with drugs such as semaglutide “could benefit brain health simply by managing diabetes.”
“However, we still need large clinical trials in representative populations to determine if semaglutide specifically lowers the risk of Alzheimer’s, so it is too early to recommend it for prevention,” Kloske said.
She noted that some research suggests that GLP-1 RAs “may help reduce inflammation and positively impact brain energy use. However, more research is needed to fully understand how these processes might contribute to preventing cognitive decline or Alzheimer’s,” Kloske cautioned.
The Alzheimer’s Association’s “Part the Cloud” initiative has invested more than $68 million to advance 65 clinical trials targeting a variety of compounds, including repurposed drugs that may address known and potential new aspects of the disease, Kloske said.
The study was supported by grants from the National Institute on Aging and the National Center for Advancing Translational Sciences. Xu and Kloske have no relevant conflicts.
A version of this article appeared on Medscape.com.
FROM ALZHEIMER’S & DEMENTIA
Blood Tests for Alzheimer’s Are Here... Are Clinicians Ready?
With the approval of anti-amyloid monoclonal antibodies to treat early-stage Alzheimer’s disease, the need for accurate and early diagnosis is crucial.
Recently, an expert workgroup convened by the Global CEO Initiative on Alzheimer’s Disease published recommendations for the clinical implementation of Alzheimer’s disease blood-based biomarkers.
“Our hope was to provide some recommendations that clinicians could use to develop the best pathways for their clinical practice,” said workgroup co-chair Michelle M. Mielke, PhD, with Wake Forest University School of Medicine, Winston-Salem, North Carolina.
Triage and Confirmatory Pathways
The group recommends two implementation pathways for Alzheimer’s disease blood biomarkers — one for current use for triaging and another for future use to confirm amyloid pathology once blood biomarker tests have reached sufficient performance for this purpose.
In the triage pathway, a negative blood biomarker test would flag individuals unlikely to have detectable brain amyloid pathology, prompting clinicians to focus on non–Alzheimer’s disease causes of cognitive impairment and potentially streamlining their diagnosis, the authors said.
A positive triage blood test would suggest a higher likelihood of amyloid pathology and prompt referral to secondary care for further assessment and consideration for a second, more accurate test, such as amyloid PET or CSF for amyloid confirmation.
In the confirmatory pathway, a positive blood biomarker test result would identify amyloid pathology without the need for a second test, providing a faster route to diagnosis, the authors noted.
Mielke emphasized that these recommendations represent a “first step” and will need to be updated as experiences with the Alzheimer’s disease blood biomarkers in clinical care increase and additional barriers and facilitators are identified.
“These updates will likely include community-informed approaches that incorporate feedback from patients as well as healthcare providers, alongside results from validation in diverse real-world settings,” said workgroup co-chair Chi Udeh-Momoh, PhD, MSc, with Wake Forest University School of Medicine and the Brain and Mind Institute, Aga Khan University, Nairobi, Kenya.
The Alzheimer’s Association published “appropriate use” recommendations for blood biomarkers in 2022.
“Currently, the Alzheimer’s Association is building an updated library of clinical guidance that distills the scientific evidence using de novo systematic reviews and translates them into clear and actionable recommendations for clinical practice,” said Rebecca M. Edelmayer, PhD, vice president of scientific engagement, Alzheimer’s Association.
“The first major effort with our new process will be the upcoming Evidence-based Clinical Practice Guideline on the Use of Blood-based Biomarkers (BBMs) in Specialty Care Settings. This guideline’s recommendations will be published in early 2025,” Edelmayer said.
Availability and Accuracy
Research has shown that amyloid beta and tau protein blood biomarkers — especially high plasma phosphorylated (p)–tau217 levels — are highly accurate in identifying Alzheimer’s disease in patients with cognitive symptoms attending primary and secondary care clinics.
Several tests targeting plasma p-tau217 are now available for use. They include the PrecivityAD2 blood test from C2N Diagnostics and the Simoa p-Tau 217 Planar Kit and LucentAD p-Tau 217 — both from Quanterix.
In a recent head-to-head comparison of seven leading blood tests for AD pathology, measures of plasma p-tau217, either individually or in combination with other plasma biomarkers, had the strongest relationships with Alzheimer’s disease outcomes.
A recent Swedish study showed that the PrecivityAD2 test had an accuracy of 91% for correctly classifying clinical, biomarker-verified Alzheimer’s disease.
“We’ve been using these blood biomarkers in research for a long time and we’re now taking the jump to start using them in clinic to risk stratify patients,” said Fanny Elahi, MD, PhD, director of fluid biomarker research for the Barbara and Maurice Deane Center for Wellness and Cognitive Health at Icahn Mount Sinai in New York City.
New York’s Mount Sinai Health System is among the first in the northeast to offer blood tests across primary and specialty care settings for early diagnosis of AD and related dementias.
Edelmayer cautioned, “There is no single, stand-alone test to diagnose Alzheimer’s disease today. Blood testing is one piece of the diagnostic process.”
“Currently, physicians use well-established diagnostic tools combined with medical history and other information, including neurological exams, cognitive and functional assessments as well as brain imaging and spinal fluid analysis and blood to make an accurate diagnosis and to understand which patients are eligible for approved treatments,” she said.
There are also emerging biomarkers in the research pipeline, Edelmayer said.
“For example, some researchers think retinal imaging has the potential to detect biological signs of Alzheimer’s disease within certain areas of the eye,” she explained.
“Other emerging biomarkers include examining components in saliva and the skin for signals that may indicate early biological changes in the brain. These biomarkers are still very exploratory, and more research is needed before these tests or biomarkers can be used more routinely to study risk or aid in diagnosis,” Edelmayer said.
Ideal Candidates for Alzheimer’s Disease Blood Testing?
Experts agree that blood tests represent a convenient and scalable option to address the anticipated surge in demand for biomarker testing with the availability of disease-modifying treatments. For now, however, they are not for all older adults worried about their memory.
“Current practice should focus on using these blood biomarkers in individuals with cognitive impairment rather than in those with normal cognition or subjective cognitive decline until further research demonstrates effective interventions for individuals considered cognitively normal with elevated levels of amyloid,” the authors of a recent JAMA editorial noted.
At Mount Sinai, “we’re not starting with stone-cold asymptomatic individuals. But ultimately, this is what the blood tests are intended for — screening,” Elahi noted.
She also noted that Mount Sinai has a “very diverse population” — some with young onset cognitive symptoms, so the entry criteria for testing are “very wide.”
“Anyone above age 40 with symptoms can qualify to get a blood test. We do ask at this stage that either the individual report symptoms or someone in their life or their clinician be worried about their cognition or their brain function,” Elahi said.
Ethical Considerations, Counseling
Elahi emphasized the importance of counseling patients who come to the clinic seeking an Alzheimer’s disease blood test. This should include how the diagnostic process will unfold and what the next steps are with a given result.
Elahi said patients need to be informed that Alzheimer’s disease blood biomarkers are still “relatively new,” and a test can help a patient “know the likelihood of having the disease, but it won’t be 100% definitive.”
To ensure the ethical principle of “do no harm,” counseling should ensure that patients are fully prepared for the implications of the test results and ensure that the decision to test aligns with the patient’s readiness and well-being, Elahi said.
Edelmayer said the forthcoming clinical practice guidelines will provide “evidence-based recommendations for physicians to help guide them through the decision-making process around who should be tested and when. In the meantime, the Alzheimer’s Association urges providers to refer to the 2022 appropriate use recommendations for blood tests in clinical practice and trial settings.”
Mielke has served on scientific advisory boards and/or consulted for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio. Edelmayer and Elahi had no relevant disclosures.
A version of this article appeared on Medscape.com.
“Anyone above age 40 with symptoms can qualify to get a blood test. We do ask at this stage that either the individual report symptoms or someone in their life or their clinician be worried about their cognition or their brain function,” Elahi said.
Ethical Considerations, Counseling
Elahi emphasized the importance of counseling patients who come to the clinic seeking an Alzheimer’s disease blood test. This should include how the diagnostic process will unfold and what the next steps are with a given result.
Elahi said patients need to be informed that Alzheimer’s disease blood biomarkers are still “relatively new,” and a test can help a patient “know the likelihood of having the disease, but it won’t be 100% definitive.”
To ensure the ethical principle of “do no harm,” counseling should ensure that patients are fully prepared for the implications of the test results and ensure that the decision to test aligns with the patient’s readiness and well-being, Elahi said.
Edelmayer said the forthcoming clinical practice guidelines will provide “evidence-based recommendations for physicians to help guide them through the decision-making process around who should be tested and when. In the meantime, the Alzheimer’s Association urges providers to refer to the 2022 appropriate use recommendations for blood tests in clinical practice and trial settings.”
Mielke has served on scientific advisory boards for and/or consulted for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio. Edelmayer and Elahi had no relevant disclosures.
A version of this article appeared on Medscape.com.
Industry Payments to Peer Reviewers Scrutinized at Four Major Medical Journals
TOPLINE:
More than half of the US peer reviewers for four major medical journals received industry payments between 2020 and 2022, new research shows. Altogether they received more than $64 million in general, non-research payments, with a median payment per physician of $7614. Research payments — including money paid directly to physicians as well as funds related to research for which a physician was registered as a principal investigator — exceeded $1 billion.
METHODOLOGY:
- Researchers identified peer reviewers in 2022 for The BMJ, JAMA, The Lancet, and The New England Journal of Medicine using each journal’s list of reviewers for that year. They included 1962 US-based physicians in their analysis.
- General and research payments made to the peer reviewers between 2020 and 2022 were extracted from the Open Payments database.
TAKEAWAY:
- Nearly 59% of the peer reviewers received industry payments between 2020 and 2022.
- Payments included $34.31 million in consulting fees and $11.8 million for speaking compensation unrelated to continuing medical education programs.
- Male reviewers received a significantly higher median total payment than did female reviewers ($38,959 vs $19,586). General payments were higher for men as well ($8663 vs $4183).
- For comparison, the median general payment to all physicians in 2018 was $216, the researchers noted.
IN PRACTICE:
“Additional research and transparency regarding industry payments in the peer review process are needed,” the authors of the study wrote.
SOURCE:
Christopher J. D. Wallis, MD, PhD, with the division of urology at the University of Toronto, Canada, was the corresponding author for the study. The article was published online October 10 in JAMA.
LIMITATIONS:
Whether the financial ties were relevant to any of the papers that the peer reviewers critiqued is not known. Some reviewers might have received additional payments from insurance and technology companies that were not captured in this study. The findings might not apply to other journals, the researchers noted.
DISCLOSURES:
Wallis disclosed personal fees from Janssen Oncology, Nanostics, Precision Point Specialty, Sesen Bio, AbbVie, Astellas, AstraZeneca, Bayer, EMD Serono, Knight Therapeutics, Merck, Science and Medicine Canada, TerSera, and Tolmar. He and some coauthors also disclosed support and grants from foundations and government institutions.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
The Game We Play Every Day
Words do have power. Names have power. Words are events, they do things, change things. They transform both speaker and hearer ... They feed understanding or emotion back and forth and amplify it. — Ursula K. Le Guin
Every medical student should have a class in linguistics. I’m just unsure what it might replace. Maybe physiology? (When was the last time you used Fick’s or Fourier’s laws anyway?) Even if we don’t supplant any core curriculum, it’s worth noting that we spend more time in our daily work calculating how to communicate things than calculating cardiac outputs. That we can convey so much so consistently and without specific training is a marvel. Making the diagnosis or a plan is often the easy part.
Linguistics is a broad field. At its essence, it studies how we communicate. It’s fascinating how we use tone, word choice, gestures, syntax, and grammar to explain, reassure, instruct, or implore patients. Medical appointments are sometimes high stakes and occur within a huge variety of circumstances. In a single day of clinic, I had a patient with dementia and one pursuing a PhD in P-Chem. I had English speakers, second-language English speakers, and a Vietnamese patient who knew no English. In just one day, I explained things to toddlers and adults, a Black woman from Oklahoma and a Jewish woman from New York. For a brief few minutes, each of them was my partner in a game of medical charades. For each one, I had to figure out how to get them to know what I’m thinking.
I learned of this game of charades concept from a podcast featuring Morten Christiansen, professor of psychology at Cornell University, and professor in Cognitive Science of Language, at Aarhus University, Denmark. The idea is that language can be thought of as a game where speakers constantly improvise based on the topic, each one’s expertise, and the shared understanding. I found this intriguing. In his explanation, grammar and definitions are less important than the mutual understanding of what is being communicated. It helps explain the wide variations of speech even among those speaking the same language. It also flips the idea that brains are designed for language, a concept proposed by linguistic greats such as Noam Chomsky and Steven Pinker. Rather, what we call language is just the best solution our brains could create to convey information.
I thought about how each of us instinctively varies the complexity of sentences and tone of voice based on the ability of each patient to understand. Gestures, storytelling and analogies are linguistic tools we use without thinking about them. We’ve a unique communications conundrum in that we often need patients to understand a complex idea, but only have minutes to get them there. We don’t want them to panic. We also don’t want them to be so dispassionate as to not act. To speed things up, we often use a technique known as chunking, short phrases that capture an idea in one bite. For example, “soak and smear” to get atopic patients to moisturize or “scrape and burn” to describe a curettage and electrodesiccation of a basal cell carcinoma or “a stick and a burn” before injecting them (I never liked that one). These are pithy, efficient. But they don’t always work.
One afternoon I had a 93-year-old woman with glossodynia. She had dementia and her 96-year-old husband was helping. When I explained how she’d “swish and spit” her magic mouthwash, he looked perplexed. Is she swishing a wand or something? I shook my head, “No” and gestured with my hands palms down, waving back and forth. It is just a mouthwash. She should rinse, then spit it out. I lost that round.
Then there was a 64-year-old woman whom I had to advise that the pink bump on her arm was a cutaneous neuroendocrine tumor. Do I call it a Merkel cell carcinoma? Do I say, “You know, like the one Jimmy Buffett had?” (Nope, not a good use of storytelling). She wanted to know how she got it. Sun exposure, we think. Or, perhaps a virus. Just how does one explain a virus called MCPyV that is ubiquitous but somehow caused cancer just for you? How do you convey, “This is serious, but you might not die like Jimmy Buffett?” I had to use all my language skills to get this right.
Then there is the Henderson-Hasselbalch problem of linguistics: communicating through a translator. When doing so, I’m cognizant of choosing short, simple sentences. Subject, verb, object. First this, then that. This mitigates what’s lost in translation and reduces waiting for translations (especially when your patient is storytelling in paragraphs). But try doing this with an emotionally fraught condition like alopecia. Finding the fewest words to convey that your FSH and estrogen levels are irrelevant to your telogen effluvium to a Vietnamese speaker is tricky. “Yes, I see your primary care physician ordered these tests. No, the numbers do not matter.” Did that translate as they are normal? Or that they don’t matter because she is 54? Or that they don’t matter to me because I didn’t order them?
When you find yourself exhausted at the day’s end, perhaps you’ll better appreciate how it was not only the graduate level medicine you did today; you’ve practically got a PhD in linguistics as well. You just didn’t realize it.
Dr. Benabio is chief of dermatology at Kaiser Permanente San Diego. The opinions expressed in this column are his own and do not represent those of Kaiser Permanente. Dr. Benabio is @Dermdoc on X. Write to him at dermnews@mdedge.com.
A Doctor Gets the Save When a Little League Umpire Collapses
Emergencies happen anywhere, anytime, and sometimes, medical professionals find themselves in situations where they are the only ones who can help. Is There a Doctor in the House? is a Medscape Medical News series telling these stories.
I sincerely believe that what goes around comes around. Good things come to good people. And sometimes that saves lives.
My 10-year-old son was in the semifinals of the Little League district championship. And we were losing. My son is an excellent pitcher, and he had started the game. But that night, he was struggling. He just couldn’t find where to throw the ball. Needless to say, he was frustrated.
He was changed to shortstop in the second inning, and the home plate umpire walked over to him. This umpire is well known in the area for his kindness and commitment, how he encourages the kids and helps make baseball fun even when it’s stressful.
We didn’t know him well, but he was really supportive of my kid in that moment, talking to him about how baseball is a team sport and we’re here to have fun. Just being really positive.
As the game continued, I saw the umpire suddenly walk to the side of the field. I hadn’t seen it, but he had been hit by a wild pitch on the side of his neck. He was wearing protective gear, but the ball managed to bounce up the side and caught his bare neck. I knew something wasn’t right.
I went down to talk to him, and my medical assistant (MA), who was also at the game, came with me. I could tell the umpire was injured, but he didn’t want to leave the game. I suggested going to the hospital, but he wouldn’t consider it. So I sat there with my arms crossed, watching him.
His symptoms got worse. I could see he was in pain, and it was getting harder for him to speak.
Again, I strongly urged him to go to the hospital, but again, he said no.
In the sixth inning, things got bad enough that the umpire finally agreed to leave the game. As I was figuring out how to get him to the hospital, he disappeared on me. He had walked up to the second floor of the snack shack. My MA and I got him back downstairs and sat him on a bench behind home plate.
We were in the process of calling 911 ... when he arrested.
Luckily, when he lost vital signs, my MA and I were standing right next to him. We were able to activate ACLS protocol and start CPR within seconds.
Many times in these critical situations — especially if people are scared or have never seen an emergency like this — there’s the potential for chaos. Well, that was the polar opposite of what happened.
As soon as I started to run the code, there was this sense of order. People were keeping their composure and following directions. My MA and I would say, “This is what we need,” and the task would immediately be assigned to someone. It was quiet. There was no yelling. Everyone trusted me, even though some of them had never met me before. It was so surprising. I remember thinking, we’re running an arrest, but it’s so calm.
We were an organized team, and it really worked like clockwork, which was remarkable given where we were. It’s one thing to be in the hospital for an event like that. But to be on a baseball field where you have nothing is a completely different scenario.
Meanwhile, the game went on.
I had requested that all the kids be placed in the dugout when they weren’t on the field. So they saw the umpire walk off, but none of them saw him arrest. Some parents were really helpful with making sure the kids were okay.
The president of Oxford Little League ran across the street to a fire station to get an AED. But the fire department personnel were out on a call. He had to break down the door.
By the time he got back, the umpire’s vital signs were returning. And then EMS arrived.
They loaded him in the ambulance, and I called ahead to the trauma team, so they knew exactly what was happening.
I was pretty worried. My hypothesis was that there was probably compression on the vasculature, which had caused him to lose his vital signs. I thought he probably had an impending airway loss. I wasn’t sure if he was going to make it through the night.
What I didn’t know was that while I was giving CPR, my son stole home, and we won the game. As the ambulance was leaving, the celebration was going on in the outfield.
The umpire was in the hospital for several days. Early on, I got permission from his family to visit him. The first time I saw him, I felt this incredible gratitude and peace.
My dad was an ER doctor, and growing up, it seemed like every time we went on a family vacation, there was an emergency. We would be near a car accident or something, and my father would fly in and save the day. I remember being on the Autobahn somewhere in Europe, and there was a devastating accident between a car and a motorcycle. My father stabilized the guy, had him airlifted out, and apparently, he did fine. I grew up watching things like this and thinking, wow, that’s incredible.
Fast forward to 2 years ago, my father was diagnosed with a lung cancer he never should have had. He never smoked. As a cancer surgeon, I know we did everything in our power to save him. But it didn’t happen. He passed away.
I realize this is superstitious, but seeing the umpire alive, I had this feeling that somehow my dad was there. It was bittersweet but also a joyful moment — like I could breathe again.
I met the umpire’s family that first time, and it was like meeting family that you didn’t know you had but now you have forever. Even though the event was traumatic — I’m still trying not to be on high alert every time I go to a game — it felt like a gift to be part of this journey with them.
Little League’s mission is to teach kids about teamwork, leadership, and making good choices so communities are stronger. Our umpire is a guy who does that every day. He’s not a Little League umpire because he makes any money. He shows up at every single game to support these kids and engage them, to model respect, gratitude, and kindness.
I think our obligation as people is to live with intentionality. We all need to make sure we leave the world a better place, even when we are called upon to do uncomfortable things. Our umpire showed our kids what that looks like, and in that moment when he could have died, we were able to do the same for him.
Jennifer LaFemina, MD, is a surgical oncologist at UMass Memorial Medical Center in Massachusetts.
Are you a medical professional with a dramatic story outside the clinic? Medscape Medical News would love to consider your story for Is There a Doctor in the House? Please email your contact information and a short summary to access@webmd.net.
A version of this article appeared on Medscape.com.
Emergencies happen anywhere, anytime, and sometimes, medical professionals find themselves in situations where they are the only ones who can help. Is There a Doctor in the House? is a Medscape Medical News series telling these stories.
I sincerely believe that what goes around comes around. Good things come to good people. And sometimes that saves lives.
My 10-year-old son was in the semifinals of the Little League district championship. And we were losing. My son is an excellent pitcher, and he had started the game. But that night, he was struggling. He just couldn’t find where to throw the ball. Needless to say, he was frustrated.
He was changed to shortstop in the second inning, and the home plate umpire walked over to him. This umpire is well known in the area for his kindness and commitment, how he encourages the kids and helps make baseball fun even when it’s stressful.
We didn’t know him well, but he was really supportive of my kid in that moment, talking to him about how baseball is a team sport and we’re here to have fun. Just being really positive.
As the game continued, I saw the umpire suddenly walk to the side of the field. I hadn’t seen it, but he had been hit by a wild pitch on the side of his neck. He was wearing protective gear, but the ball managed to bounce up the side and caught his bare neck. I knew something wasn’t right.
I went down to talk to him, and my medical assistant (MA), who was also at the game, came with me. I could tell the umpire was injured, but he didn’t want to leave the game. I suggested going to the hospital, but he wouldn’t consider it. So I sat there with my arms crossed, watching him.
His symptoms got worse. I could see he was in pain, and it was getting harder for him to speak.
Again, I strongly urged him to go to the hospital, but again, he said no.
In the sixth inning, things got bad enough that the umpire finally agreed to leave the game. As I was figuring out how to get him to the hospital, he disappeared on me. He had walked up to the second floor of the snack shack. My MA and I got him back downstairs and sat him on a bench behind home plate.
We were in the process of calling 911 ... when he arrested.
Luckily, when he lost vital signs, my MA and I were standing right next to him. We were able to activate ACLS protocol and start CPR within seconds.
Many times in these critical situations — especially if people are scared or have never seen an emergency like this — there’s the potential for chaos. Well, that was the polar opposite of what happened.
As soon as I started to run the code, there was this sense of order. People were keeping their composure and following directions. My MA and I would say, “This is what we need,” and the task would immediately be assigned to someone. It was quiet. There was no yelling. Everyone trusted me, even though some of them had never met me before. It was so surprising. I remember thinking, we’re running an arrest, but it’s so calm.
We were an organized team, and it really worked like clockwork, which was remarkable given where we were. It’s one thing to be in the hospital for an event like that. But to be on a baseball field where you have nothing is a completely different scenario.
Meanwhile, the game went on.
I had requested that all the kids be placed in the dugout when they weren’t on the field. So they saw the umpire walk off, but none of them saw him arrest. Some parents were really helpful with making sure the kids were okay.
The president of Oxford Little League ran across the street to a fire station to get an AED. But the fire department personnel were out on a call. He had to break down the door.
By the time he got back, the umpire’s vital signs were returning. And then EMS arrived.
They loaded him in the ambulance, and I called ahead to the trauma team, so they knew exactly what was happening.
I was pretty worried. My hypothesis was that there was probably compression on the vasculature, which had caused him to lose his vital signs. I thought he probably had an impending airway loss. I wasn’t sure if he was going to make it through the night.
What I didn’t know was that while I was giving CPR, my son stole home, and we won the game. As the ambulance was leaving, the celebration was going on in the outfield.
The umpire was in the hospital for several days. Early on, I got permission from his family to visit him. The first time I saw him, I felt this incredible gratitude and peace.
My dad was an ER doctor, and growing up, it seemed like every time we went on a family vacation, there was an emergency. We would be near a car accident or something, and my father would fly in and save the day. I remember being on the Autobahn somewhere in Europe, and there was a devastating accident between a car and a motorcycle. My father stabilized the guy, had him airlifted out, and apparently, he did fine. I grew up watching things like this and thinking, wow, that’s incredible.
Fast forward to 2 years ago, my father was diagnosed with a lung cancer he never should have had. He never smoked. As a cancer surgeon, I know we did everything in our power to save him. But it didn’t happen. He passed away.
I realize this is superstitious, but seeing the umpire alive, I had this feeling that somehow my dad was there. It was bittersweet but also a joyful moment — like I could breathe again.
I met the umpire’s family that first time, and it was like meeting family that you didn’t know you had but now you have forever. Even though the event was traumatic — I’m still trying not to be on high alert every time I go to a game — it felt like a gift to be part of this journey with them.
Little League’s mission is to teach kids about teamwork, leadership, and making good choices so communities are stronger. Our umpire is a guy who does that every day. He’s not a Little League umpire because he makes any money. He shows up at every single game to support these kids and engage them, to model respect, gratitude, and kindness.
I think our obligation as people is to live with intentionality. We all need to make sure we leave the world a better place, even when we are called upon to do uncomfortable things. Our umpire showed our kids what that looks like, and in that moment when he could have died, we were able to do the same for him.
Jennifer LaFemina, MD, is a surgical oncologist at UMass Memorial Medical Center in Massachusetts.
Are you a medical professional with a dramatic story outside the clinic? Medscape Medical News would love to consider your story for Is There a Doctor in the House? Please email your contact information and a short summary to access@webmd.net.
A version of this article appeared on Medscape.com.
Dry Eye Linked to Increased Risk for Mental Health Disorders
TOPLINE:
Patients with dry eye disease are more than three times as likely to have mental health conditions, such as depression and anxiety, as those without the condition.
METHODOLOGY:
- Researchers used a database from the National Institutes of Health to investigate the association between dry eye disease and mental health disorders in a large and diverse nationwide population of American adults.
- They identified 18,257 patients (mean age, 64.9 years; 67% women) with dry eye disease who were propensity score–matched with 54,765 participants without the condition.
- The cases of dry eye disease were identified using Systematized Nomenclature of Medicine codes for dry eyes, meibomian gland dysfunction, and tear film insufficiency.
- The outcome measures for mental health conditions were clinical diagnoses of depressive disorders, anxiety-related disorders, bipolar disorder, and schizophrenia spectrum disorders.
TAKEAWAY:
- Patients with dry eye disease had more than triple the odds of mental health conditions compared with participants without the condition (adjusted odds ratio [aOR], 3.21; P < .001).
- Patients with dry eye disease also had higher odds of a depressive disorder (aOR, 3.47), anxiety-related disorder (aOR, 2.74), bipolar disorder (aOR, 2.23), and schizophrenia spectrum disorder (aOR, 2.48; P < .001 for all) than participants without the condition.
- The associations between dry eye disease and mental health conditions were significantly stronger among Black individuals than among White individuals, except for bipolar disorder.
- Dry eye disease was associated with two- to threefold higher odds of depressive disorders, anxiety-related disorders, bipolar disorder, and schizophrenia spectrum disorders even in participants who never used medications for mental health (P < .001 for all).
IN PRACTICE:
“Greater efforts should be undertaken to screen patients with DED [dry eye disease] for mental health conditions, particularly in historically medically underserved populations,” the authors of the study wrote.
SOURCE:
This study was led by Aaron T. Zhao, of the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, and was published online on October 15, 2024, in the American Journal of Ophthalmology.
LIMITATIONS:
This study relied on electronic health record data, which may have led to the inclusion of participants with undiagnosed dry eye disease as control participants. Moreover, the study did not evaluate the severity of dry eye disease or the severity and duration of mental health conditions, which may have affected the results. The database analyzed in this study may not have fully captured the complete demographic profile of the nationwide population, which may have affected the generalizability of the findings.
DISCLOSURES:
This study was supported by funding from the National Institutes of Health and Research to Prevent Blindness. The authors declared having no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.