
Gut Biomarkers Accurately Flag Autism Spectrum Disorder

Updated Thu, 07/11/2024 - 10:28

Bacterial and nonbacterial components of the gut microbiome and their function can accurately differentiate children with autism spectrum disorder (ASD) from neurotypical children, new research shows. 

The findings could form the basis for development of a noninvasive diagnostic test for ASD and also provide novel therapeutic targets, wrote investigators, led by Siew C. Ng, MBBS, PhD, with the Microbiota I-Center (MagIC), the Chinese University of Hong Kong.

Their study was published online in Nature Microbiology.

Beyond Bacteria

The gut microbiome has been shown to play a central role in modulating the gut-brain axis, potentially influencing the development of ASD. 

However, most studies in ASD have focused on the bacterial component of the microbiome. Whether nonbacterial microorganisms (such as gut archaea, fungi, and viruses) or function of the gut microbiome are altered in ASD remains unclear. 

To investigate, the researchers performed metagenomic sequencing on fecal samples from 1627 boys and girls aged 1-13 years with and without ASD from five cohorts in China. 

After controlling for diet, medication, and comorbidity, they identified 14 archaea, 51 bacteria, 7 fungi, 18 viruses, 27 microbial genes, and 12 metabolic pathways that were altered in children with ASD. 

Machine-learning models using single-kingdom panels (archaea, bacteria, fungi, viruses) achieved area under the curve (AUC) values ranging from 0.68 to 0.87 in differentiating children with ASD from neurotypical control children. 

A model based on a panel of 31 multikingdom and functional markers showed “high predictive value” for ASD, with an AUC of 0.91 and comparable performance in boys and girls. 

“The reproducible performance of the models across ages, sexes, and cohorts highlights their potential as promising diagnostic tools for ASD,” the investigators wrote. 

They also noted that the model’s accuracy was largely driven by the biosynthesis pathways of ubiquinol-7 and thiamine diphosphate, which were less abundant in children with ASD and may serve as therapeutic targets. 

‘Exciting’ Possibilities 

“This study broadens our understanding by including fungi, archaea, and viruses, where previous studies have largely focused on the role of gut bacteria in autism,” Bhismadev Chakrabarti, PhD, research director of the Centre for Autism at the University of Reading, United Kingdom, said in a statement from the nonprofit UK Science Media Centre. 

“The results are broadly in line with previous studies that show reduced microbial diversity in autistic individuals. It also examines one of the largest samples seen in a study like this, which further strengthens the results,” Dr. Chakrabarti added. 

He said this research may provide “new ways of detecting autism, if microbial markers turn out to strengthen the ability of genetic and behavioral tests to detect autism. A future platform that can combine genetic, microbial, and simple behavioral assessments could help address the detection gap.

“One limitation of this data is that it cannot assess any causal role for the microbiota in the development of autism,” Dr. Chakrabarti noted. 

This study was supported by InnoHK, the Government of the Hong Kong Special Administrative Region of the People’s Republic of China, The D. H. Chen Foundation, and the New Cornerstone Science Foundation through the New Cornerstone Investigator Program. Dr. Ng has served as an advisory board member for Pfizer, Ferring, Janssen, and AbbVie; has received honoraria as a speaker for Ferring, Tillotts, Menarini, Janssen, AbbVie, and Takeda; is a scientific cofounder and shareholder of GenieBiome; receives patent royalties through her affiliated institutions; and is named as a co-inventor on patent applications covering the therapeutic and diagnostic use of the microbiome. Dr. Chakrabarti has no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.


FROM NATURE MICROBIOLOGY


Should Cancer Trial Eligibility Become More Inclusive?

Updated Wed, 07/10/2024 - 17:00

Patients with treatment-refractory cancers who did not meet eligibility criteria for a pan-cancer clinical trial but received waivers allowing them to participate had similar outcomes to patients who participated without waivers, a new analysis revealed.

The study, published online in Clinical Cancer Research, highlighted the potential benefits of broadening eligibility criteria for clinical trials.

“It is well known that results in an ‘ideal’ population do not always translate to the real-world population,” senior author Hans Gelderblom, MD, chair of the Department of Medical Oncology at the Leiden University Medical Center, Leiden, the Netherlands, said in a press release. “Eligibility criteria are often too strict, and educated exemptions by experienced investigators can help individual patients, especially in a last-resort trial.”

Although experts have expressed interest in improving trial inclusivity, it’s unclear how doing so might impact treatment safety and efficacy.

In the Drug Rediscovery Protocol (DRUP), Dr. Gelderblom and colleagues examined the impact of broadening trial eligibility on patient outcomes. DRUP is an ongoing Dutch national, multicenter, pan-cancer, nonrandomized clinical trial in which patients are treated off-label with approved molecularly targeted or immunotherapies.

In the trial, 1019 patients with treatment-refractory disease were matched to one of the available study drugs based on their tumor molecular profile and enrolled in parallel cohorts. Cohorts were defined by tumor type, molecular profile, and study drug.

Among these patients, 82 (8% of the cohort) were granted waivers to participate. Most waivers (45%) were exceptions to general or drug-related eligibility criteria, often because of out-of-range lab results. Other categories included treatment and testing exceptions, as well as out-of-window testing. 

The researchers then compared safety and efficacy outcomes between the 82 participants granted waivers and the 937 who did not receive waivers. 

Overall, Dr. Gelderblom’s team found that the rate of serious adverse events was similar between patients who received a waiver and those who did not: 39% vs 41%, respectively.

A relationship between waivers and serious adverse events was deemed “unlikely” for 86% of patients and “possible” for 14%. In two cases concerning a direct relationship, for instance, patients who received waivers for decreased hemoglobin levels developed anemia.

The rate of clinical benefit, defined as an objective response or stable disease for at least 16 weeks, was similar between the groups. Overall, 40% of patients who received a waiver (33 of 82) had a clinical benefit vs 33% of patients without a waiver (P = .43). Median overall survival was also similar: 11 months in the waiver group vs 8 months in the nonwaiver group (hazard ratio, 0.87; P = .33).

“Safety and clinical benefit were preserved in patients for whom a waiver was granted,” the authors concluded.

The study had several limitations. The diversity of cancer types, treatments, and reasons for protocol exemptions precluded subgroup analyses. In addition, because the decision to grant waivers depended in large part on the likelihood of clinical benefit, “it is possible that patients who received waivers were positively selected for clinical benefit compared with the general study population,” the authors wrote.

So, “although the clinical benefit rate of the patient group for whom a waiver was granted appears to be slightly higher, this difference might be explained by the selection process of the central study team, in which each waiver request was carefully considered, weighing the risks and potential benefits for the patient in question,” the authors explained.

Overall, “these findings advocate for a broader and more inclusive design when establishing novel trials, paving the way for a more effective and tailored application of cancer therapies in patients with advanced or refractory disease,” Dr. Gelderblom said.

Commenting on the study, Bishal Gyawali, MD, PhD, said that “relaxing eligibility criteria is important, and I support this. Trials should include patients that are more representative of the real-world, so that results are generalizable.”

However, “the paper overemphasized efficacy,” said Dr. Gyawali, from Queen’s University, Kingston, Ontario, Canada. The sample size of waiver-granted patients was small, plus “the clinical benefit rate is not a marker of efficacy.

“The response rate is somewhat better, but for a heterogeneous study with multiple targets and drugs, it is difficult to say much about treatment effects here,” Dr. Gyawali added. Overall, “we shouldn’t read too much into treatment benefits based on these numbers.”

Funding for the study was provided by the Stelvio for Life Foundation, the Dutch Cancer Society, Amgen, AstraZeneca, Bayer, Boehringer Ingelheim, Bristol Myers Squibb, pharma&, Eisai Co., Ipsen, Merck Sharp & Dohme, Novartis, Pfizer, and Roche. Dr. Gelderblom declared no conflicts of interest, and Dr. Gyawali declared no conflicts of interest related to his comment.
 

A version of this article appeared on Medscape.com.



COMBAT-MS: Therapy Choice for Relapsing-Remitting MS Has ‘Small’ Impact on Disability Progression, Patient-Reported Outcomes

Updated Tue, 07/09/2024 - 11:46

An initial choice of disease-modifying therapy for patients with relapsing-remitting multiple sclerosis (MS) does not appear to have a large effect on eventual progression of disability and patient-reported outcomes, according to recent research published in Annals of Neurology.

Fredrik Piehl, MD, PhD, of the department of clinical neuroscience at Karolinska Institutet in Stockholm, and colleagues analyzed results from a Swedish cohort study of 2449 patients with relapsing-remitting MS who started an initial disease-modifying therapy (DMT) and 2463 patients who switched from their first therapy between 2011 and 2018; 1148 patients appeared in both groups. DMTs evaluated in the initial-treatment group included rituximab (591 patients), natalizumab (334 patients), dimethyl fumarate (416 patients), interferon (992 patients), and glatiramer acetate (116 patients), while DMTs in the therapy-switching group included rituximab (748 patients), natalizumab (541 patients), dimethyl fumarate (570 patients), fingolimod (443 patients), and teriflunomide (161 patients).

The researchers compared patients receiving low-dose rituximab with other MS therapies, with confirmed disability worsening (CDW) over 12 months and change in disease-related impact on daily life as measured by MS Impact Scale-29 (MSIS-29) subscales as primary outcomes at 3 years after therapy initiation or switching. They also assessed the rate of relapse, discontinuation of therapy, and serious adverse events as secondary outcomes.

At 3 years, among patients who received rituximab, 9.1% of patients who initiated therapy and 5.1% who switched therapy experienced CDW, and there were no significant differences in disease worsening between patients who received rituximab and those who received other MS therapies. “Most instances of CDW on rituximab were in subjects with no relapse within 3 years of treatment start,” the researchers said.

Patient MSIS-29 physical subscores at 3 years improved by 1.3 points in the initial DMT group and by 0.4 points in the DMT-switching group, while MSIS-29 psychological scores improved by 8.4 points in the initial DMT group and by 3.6 points in the DMT-switching group. “Adjusted for baseline characteristics, MSIS-29 physical subscale scores decreased more with natalizumab, both as a first DMT and after a DMT switch, compared with rituximab, although absolute differences were small,” Dr. Piehl and colleagues said.

With regard to secondary outcomes, there was a reduction in mean overall Expanded Disability Status Scale (EDSS) score compared with baseline in the initial rituximab group at 3 years (–0.2 points), with 28.7% of patients experiencing improvement and 19.0% experiencing worsening, while there was no overall change in mean EDSS score in the rituximab-switching group. At 5 years, mean EDSS scores decreased compared with baseline in the initial rituximab group (–0.1 point), with 27.1% of patients experiencing improvement and 20.8% experiencing worsening, and there was an increase in overall EDSS score (0.1 point) at 5 years in the rituximab-switching group, with improvement in 17.9% of patients and worsening in 26.4%. However, there were no significant differences between rituximab and other DMTs.

Patients in both the initial and switching rituximab groups had a lower annualized relapse rate (ARR) than patients on other DMTs, with the exception of natalizumab in the initial DMT group (3 vs 2 additional relapses per 100 patients per year). The highest ARRs in the initial DMT group were seen with interferon (13 additional relapses per 100 patients per year) and teriflunomide (8 additional relapses per 100 patients per year). “Similar differences were evident also at 5 years, with significantly higher ARRs with all other DMTs compared with rituximab, except for natalizumab, in both the first DMT and DMT switch groups,” Dr. Piehl and colleagues said.
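
The “additional relapses per 100 patients per year” figures are differences in annualized relapse rates (ARR), that is, relapses per patient-year of follow-up scaled to 100 patients. A minimal sketch of that arithmetic, using hypothetical relapse counts and follow-up chosen only to reproduce the reported 13-relapse excess for interferon (not the study’s actual data):

```python
def arr_per_100(total_relapses, total_patient_years):
    """Annualized relapse rate (ARR), expressed per 100 patients per year."""
    return 100 * total_relapses / total_patient_years

# Hypothetical counts, chosen only to illustrate the calculation;
# they are not the study's actual relapse or follow-up numbers.
rituximab_arr = arr_per_100(30, 1500)    # 2.0 relapses per 100 patient-years
interferon_arr = arr_per_100(225, 1500)  # 15.0 relapses per 100 patient-years

# "13 additional relapses per 100 patients per year" relative to rituximab
excess_vs_rituximab = interferon_arr - rituximab_arr
print(excess_vs_rituximab)  # 13.0
```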

Among patients who received rituximab, 75.7% in the initial DMT group and 82.1% in the DMT-switching group had no evidence of disease activity (NEDA-3) at 3 years, which was “greater than for all comparators, except natalizumab as a first DMT,” the researchers said. “Proportions fulfilling NEDA-3 status at 5 years were higher with rituximab than with all comparators in both cohorts,” they noted.

Concerning safety, the researchers said there were minor differences in safety outcomes between rituximab and comparators, but patients in the DMT-switching group who received rituximab had a higher risk of severe infections compared with other groups.

Unanswered Questions About MS Therapies

In an interview, Mark Gudesblatt, MD, a neurologist at South Shore Neurologic Associates, New York, who was not involved in the study, emphasized the importance of high-potency DMTs and adherence for treatment success.

“Lower-efficacy DMT might result in insufficient suppression of disease activity that might not be clinically apparent,” he said. “Routine examination is not sufficient to detect cognitive impairment or change in cognitive impact of disease. Adherence is critical to therapy success, and infusion therapies or treatment not self-administered have higher likelihood of higher adherence rates.”

Commenting on the study by Piehl et al, Dr. Gudesblatt said it “provides important real-world information” on how infusion therapies are tolerated, their effectiveness, and their adherence compared with oral or self-administered treatments. For rituximab, “just as importantly, this therapy provides effective disease control with less accumulated disability and disability related health care costs,” he said.

Dr. Gudesblatt said there are several unanswered issues in the study, including the uncertain nature of the incidence and development of rituximab-blocking antibodies, which could potentially differ by biosimilar. “[H]ow this impacts therapy efficacy is unclear,” he said. “The presence of blocking antibodies should be routinely monitored.”

Another issue is the between-patient variation in degree of B-cell depletion and speed of B-cell repletion, which might differ based on therapy duration. “The timing and frequency of dosing is an issue that also needs further critical analysis and improved guidelines,” he noted.

Dr. Gudesblatt said up to 25% of patients with MS might have unrecognized immune deficiency. “[I]mmune deficiency unrelated to DMT as well as the development of immune deficiency related to DMT are issues of concern, as the rate of infections in B-cell depleting agents are higher than other class of DMT,” he explained. Patients with MS who develop infections carry significant risk of morbidity and mortality, he added.

“Lastly, the issue of vaccination failure is extremely high in B-cell depleting agents, and with the recent viral pandemic and lingering concerns about recurrent similar scenarios, this is another issue of great concern with use of this highly adherent and effective DMT choice,” Dr. Gudesblatt said.

Several authors reported personal and institutional relationships in the form of grants, consultancies, research support, honoraria, advisory board positions, travel support, and other fees for Bayer, Biogen, Merck, Novartis, Roche, and Teva. Dr. Gudesblatt reports no relevant conflicts of interest.


FROM ANNALS OF NEUROLOGY


Neck Pain in Migraine Is Common, Linked to More Disability

Article Type
Changed
Mon, 07/08/2024 - 12:03

More than two-thirds of patients with migraine also suffer from neck pain, a combination that’s linked to higher levels of various forms of disability, an international, prospective, cross-sectional study finds.

Of 51,969 respondents with headache over the past year, the 27.9% with migraine were more likely to have neck pain than those with non-migraine headache (68.3% vs 36.1%, respectively, P < .001), reported Richard B. Lipton, MD, professor of neurology at Albert Einstein College of Medicine, New York City, and colleagues in Headache.
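
The neck pain comparison (68.3% vs 36.1%, P < .001) is a comparison of two independent proportions; a hedged sketch of how such a P value can be checked with a pooled two-proportion z-test, using counts approximately reconstructed from the reported group sizes (14,492 with migraine; 37,477 with non-migraine headache) and percentages, not taken from the paper’s own analysis:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided P value

# Counts approximated from the reported percentages and group sizes
z, p = two_proportion_z(9898, 14492, 13529, 37477)  # ~68.3% vs ~36.1%
print(z > 0, p < 0.001)  # True True
```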

Compared with other patients with migraine, those who also have neck pain have “greater disability, more psychiatric comorbidities, more allodynia, diminished quality of life, decreased work productivity, and reduced response to treatment,” Dr. Lipton said in an interview. “If patients don’t report [neck pain], it is probably worth asking about. And when patients have both migraine and neck pain, they may merit increased therapeutic attention.”

As Dr. Lipton noted, clinicians have long known that neck pain is common in migraine, although it’s been unclear how the two conditions are connected. “One possibility is that the neck pain is actually a manifestation of the migraine headache. Another possibility is that the neck pain is an independent factor unrelated to migraine headaches: Many people have migraine and cervical spine disease. And the third possibility is that neck pain may be an exacerbating factor, that cervical spine disease may make the migraine worse.”

Referred pain is a potential factor too, he said.

Assessing Migraine, Neck Pain, and Disability

The new study sought to better understand the role of neck pain in migraine, Dr. Lipton said.

For the CaMEO-I study, researchers surveyed 51,969 adults with headache via the Internet in Canada, France, Germany, Japan, the United Kingdom, and the United States from 2021 to 2022. Most of the 37,477 patients with non-migraine headaches were considered to have tension headaches.

Among the 14,492 patients with migraine, demographics were similar between those who had neck pain and those who didn’t (mean age, 40.7 vs 42.1 years; 68.4% vs 72.5% female; mean BMI, 26.0 vs 26.4, respectively).

Among patients in the US, 71.4% of patients with migraine reported neck pain versus 35.9% of those with non-migraine headaches. In Canada, the numbers were 69.5% and 37.5%, respectively.

Among all patients with migraine, moderate-to-severe disability was more common among those with neck pain than those without (47.7% vs 28.9%, P < .001). Those with both migraine and neck pain had a greater symptom burden (P < .001), and 28.4% said neck pain was their most bothersome symptom. They also had a higher number of symptoms (P < .001).

Several conditions were more common among patients with migraine who reported neck pain than among those who didn’t (depression, 40.2% vs 28.2%; anxiety, 41.2% vs 29.2%; allodynia, 54.0% vs 36.6%; all P < .001). Those with neck pain were also more likely to have “poor acute treatment optimization” (61.1% vs 53.3%, P < .001).

Researchers noted limitations such as the use of self-reported data, the potential for selection bias, limitations regarding survey questions, and an inability to determine causation.

Clinical Messages

The findings suggest that patients with both migraine and neck pain have greater activation of second-order neurons in the trigeminocervical complex, Dr. Lipton said.

He added that neck pain is often part of the migraine prodrome or the migraine attack itself, suggesting that it’s “part and parcel of the migraine attack.” However, neck pain may have another cause — such as degenerative disease of the neck — if it’s not directly connected to migraine, he added.

As for clinical messages from the study, “it’s quite likely that the neck pain is a primary manifestation of migraine. Migraine may well be the explanation in the absence of a reason to look further,” Dr. Lipton said.

If neck pain heralds a migraine, treating the prodrome with CGRP receptor antagonists (“gepants”) can be helpful, he said. Other preventive options he highlighted include beta blockers, antiepilepsy drugs, and monoclonal antibodies. There’s also anecdotal support for using botulinum toxin A in patients with chronic migraine and neck pain, he said.

In an interview, Mayo Clinic Arizona associate professor of neurology Rashmi B. Halker Singh, MD, who’s familiar with the study but did not take part in it, praised the research. The findings “help us to better understand the impact of living with neck pain if you are somebody with migraine,” she said. “It alerts us that we need to be more aggressive in how we manage that in patients.”

The study also emphasizes the importance of preventive medication in appropriate patients with migraine, especially those with neck pain who may be living with greater disability, she said. “About 13% of people with migraine are on a preventive medication, but about 40% are eligible. That’s an area where we have a big gap.”

Dr. Halker Singh added that non-medication strategies such as acupuncture and physical therapy can be helpful.

AbbVie funded the study. Dr. Lipton reports support for the study from AbbVie; research support paid to his institution from the Czap Foundation, National Headache Foundation, National Institutes of Health, S&L Marx Foundation, and US Food and Drug Administration; and personal fees from AbbVie/Allergan, American Academy of Neurology, American Headache Society, Amgen, Biohaven, Biovision, Boston, Dr. Reddy’s (Promius), electroCore, Eli Lilly, GlaxoSmithKline, Grifols, Lundbeck (Alder), Merck, Pernix, Pfizer, Teva, Vector, and Vedanta Research. He holds stock/options in Axon, Biohaven, CoolTech, and Manistee. Other authors report various disclosures.

Dr. Halker Singh is deputy editor of Headache, where the study was published, but wasn’t aware of it until it was published.

Publications
Topics
Sections

More than two-thirds of patients with migraine also suffer from neck pain, a combination that’s linked to higher levels of various forms of disability, an international, prospective, cross-sectional study finds.

Of 51,969 respondents with headache over the past year, the 27.9% with migraine were more likely to have neck pain than those with non-migraine headache (68.3% vs 36.1%, respectively, P < .001), reported Richard B. Lipton, MD, professor of neurology at Albert Einstein College of Medicine, New York City, and colleagues in Headache.

Compared with other patients with migraine, those who also have neck pain have “greater disability, more psychiatric comorbidities, more allodynia, diminished quality of life, decreased work productivity, and reduced response to treatment,” Dr. Lipton said in an interview. “If patients don’t report [neck pain], it is probably worth asking about. And when patients have both migraine and neck pain, they may merit increased therapeutic attention.”

As Dr. Lipton noted, clinicians have long known that neck pain is common in migraine, although it’s been unclear how the two conditions are connected. “One possibility is that the neck pain is actually a manifestation of the migraine headache. Another possibility is that the neck pain is an independent factor unrelated to migraine headaches: Many people have migraine and cervical spine disease. And the third possibility is that neck pain may be an exacerbating factor, that cervical spine disease may make the migraine worse.”

Referred pain is a potential factor too, he said.
 

Assessing Migraine, Neck Pain, and Disability

The new study sought to better understand the role of neck pain in migraine, Dr. Lipton said.

For the CaMEO-I study, researchers surveyed 51,969 adults with headache via the Internet in Canada, France, Germany, Japan, United Kingdom, and the United States from 2021-2022. Most of the 37,477 patients with non-migraine headaches were considered to have tension headaches.

Among the 14,492 patients with migraine, demographics were statistically similar among those who had neck pain or didn’t have it (average age = 40.7 and 42.1, 68.4% and 72.5% female, and average BMIs = 26.0 and 26.4, respectively).

Among patients in the US, 71.4% of patients with migraine reported neck pain versus 35.9% of those with non-migraine headaches. In Canada, the numbers were 69.5% and 37.5%, respectively.

Among all patients with migraine, moderate-to-severe disability was more common among those with neck pain than those without neck pain (47.7% vs 28.9%, respectively, P < .001). Those with both migraine and neck pain had more symptom burden (P < .001), and 28.4% said neck pain was their most bothersome symptom. They also had a higher number of symptoms (P < .001).

Several conditions were more common among patients with migraine who reported neck pain versus those who didn’t (depression/anxiety, 40.2% vs 28.2%; anxiety, 41.2% vs 29.2%; and allodynia, 54.0% vs 36.6%, respectively, all P  <  0.001). Those with neck pain were also more likely to have “poor acute treatment optimization” (61.1% vs 53.3%, respectively, P < .001).

Researchers noted limitations such as the use of self-reported data, the potential for selection bias, limitations regarding survey questions, and an inability to determine causation.
 

 

 

Clinical Messages

The findings suggest that patients with both migraine and neck pain have greater activation of second-order neurons in the trigeminocervical complex, Dr. Lipton said.

He added that neck pain is often part of the migraine prodrome or the migraine attack itself, suggesting that it’s “part and parcel of the migraine attack.” However, neck pain may have another cause — such as degenerative disease of the neck — if it’s not directly connected to migraine, he added.

As for clinical messages from the study, “it’s quite likely that the neck pain is a primary manifestation of migraine. Migraine may well be the explanation in the absence of a reason to look further,” Dr. Lipton said.

If neck pain heralds a migraine, treating the prodrome with CGRP receptor antagonists (“gepants”) can be helpful, he said. He highlighted other preventive options include beta blockers, anti-epilepsy drugs, and monoclonal antibodies. There’s also anecdotal support for using botulinum toxin A in patients with chronic migraine and neck pain, he said.

In an interview, Mayo Clinic Arizona associate professor of neurology Rashmi B. Halker Singh, MD, who’s familiar with the study but did not take part in it, praised the research. The findings “help us to better understand the impact of living with neck pain if you are somebody with migraine,” she said. “It alerts us that we need to be more aggressive in how we manage that in patients.”

The study also emphasizes the importance of preventive medication in appropriate patients with migraine, especially those with neck pain who may be living with greater disability, she said. “About 13% of people with migraine are on a preventive medication, but about 40% are eligible. That’s an area where we have a big gap.”

Dr. Halker Singh added that non-medication strategies such as acupuncture and physical therapy can be helpful.

AbbVie funded the study. Dr. Lipton reports support for the study from AbbVie; research support paid to his institution from the Czap Foundation, National Headache Foundation, National Institutes of Health, S&L Marx Foundation, and US Food and Drug Administration; and personal fees from AbbVie/Allergan, American Academy of Neurology, American Headache Society, Amgen, Biohaven, Biovision, Boston, Dr. Reddy’s (Promius), electroCore, Eli Lilly, GlaxoSmithKline, Grifols, Lundbeck (Alder), Merck, Pernix, Pfizer, Teva, Vector, and Vedanta Research. He holds stock/options in Axon, Biohaven, CoolTech, and Manistee. Other authors report various disclosures.

Dr. Halker Singh is deputy editor of Headache, where the study was published, but wasn’t aware of it until it was published.

More than two-thirds of patients with migraine also suffer from neck pain, a combination that’s linked to higher levels of various forms of disability, an international, prospective, cross-sectional study finds.

Of 51,969 respondents with headache over the past year, the 27.9% with migraine were more likely to have neck pain than those with non-migraine headache (68.3% vs 36.1%, respectively, P < .001), reported Richard B. Lipton, MD, professor of neurology at Albert Einstein College of Medicine, New York City, and colleagues in Headache.

Compared with other patients with migraine, those who also have neck pain have “greater disability, more psychiatric comorbidities, more allodynia, diminished quality of life, decreased work productivity, and reduced response to treatment,” Dr. Lipton said in an interview. “If patients don’t report [neck pain], it is probably worth asking about. And when patients have both migraine and neck pain, they may merit increased therapeutic attention.”

As Dr. Lipton noted, clinicians have long known that neck pain is common in migraine, although it’s been unclear how the two conditions are connected. “One possibility is that the neck pain is actually a manifestation of the migraine headache. Another possibility is that the neck pain is an independent factor unrelated to migraine headaches: Many people have migraine and cervical spine disease. And the third possibility is that neck pain may be an exacerbating factor, that cervical spine disease may make the migraine worse.”

Referred pain is a potential factor too, he said.

Assessing Migraine, Neck Pain, and Disability

The new study sought to better understand the role of neck pain in migraine, Dr. Lipton said.

For the CaMEO-I study, researchers surveyed 51,969 adults with headache via the Internet in Canada, France, Germany, Japan, the United Kingdom, and the United States from 2021 to 2022. Most of the 37,477 patients with non-migraine headaches were considered to have tension-type headaches.

Among the 14,492 patients with migraine, demographics were statistically similar between those with and without neck pain (mean age, 40.7 vs 42.1 years; 68.4% vs 72.5% female; mean BMI, 26.0 vs 26.4, respectively).

Among patients in the US, 71.4% of patients with migraine reported neck pain versus 35.9% of those with non-migraine headaches. In Canada, the numbers were 69.5% and 37.5%, respectively.

Among all patients with migraine, moderate to severe disability was more common in those with neck pain than in those without (47.7% vs 28.9%, respectively, P < .001). Those with both migraine and neck pain also had a greater symptom burden and a higher number of symptoms (both P < .001), and 28.4% said neck pain was their most bothersome symptom.

Several conditions were more common among patients with migraine who reported neck pain than among those who didn't (depression, 40.2% vs 28.2%; anxiety, 41.2% vs 29.2%; and allodynia, 54.0% vs 36.6%, respectively, all P < .001). Those with neck pain were also more likely to have "poor acute treatment optimization" (61.1% vs 53.3%, respectively, P < .001).

Researchers noted limitations such as the use of self-reported data, the potential for selection bias, limitations regarding survey questions, and an inability to determine causation.

Clinical Messages

The findings suggest that patients with both migraine and neck pain have greater activation of second-order neurons in the trigeminocervical complex, Dr. Lipton said.

He added that neck pain is often part of the migraine prodrome or the migraine attack itself, suggesting that it’s “part and parcel of the migraine attack.” However, neck pain may have another cause — such as degenerative disease of the neck — if it’s not directly connected to migraine, he added.

As for clinical messages from the study, “it’s quite likely that the neck pain is a primary manifestation of migraine. Migraine may well be the explanation in the absence of a reason to look further,” Dr. Lipton said.

If neck pain heralds a migraine attack, treating the prodrome with CGRP receptor antagonists ("gepants") can be helpful, he said. He noted that other preventive options include beta-blockers, antiepileptic drugs, and monoclonal antibodies. There's also anecdotal support for using botulinum toxin A in patients with chronic migraine and neck pain, he said.

In an interview, Mayo Clinic Arizona associate professor of neurology Rashmi B. Halker Singh, MD, who’s familiar with the study but did not take part in it, praised the research. The findings “help us to better understand the impact of living with neck pain if you are somebody with migraine,” she said. “It alerts us that we need to be more aggressive in how we manage that in patients.”

The study also emphasizes the importance of preventive medication in appropriate patients with migraine, especially those with neck pain who may be living with greater disability, she said. “About 13% of people with migraine are on a preventive medication, but about 40% are eligible. That’s an area where we have a big gap.”

Dr. Halker Singh added that non-medication strategies such as acupuncture and physical therapy can be helpful.

AbbVie funded the study. Dr. Lipton reports support for the study from AbbVie; research support paid to his institution from the Czap Foundation, National Headache Foundation, National Institutes of Health, S&L Marx Foundation, and US Food and Drug Administration; and personal fees from AbbVie/Allergan, American Academy of Neurology, American Headache Society, Amgen, Biohaven, Biovision, Boston, Dr. Reddy’s (Promius), electroCore, Eli Lilly, GlaxoSmithKline, Grifols, Lundbeck (Alder), Merck, Pernix, Pfizer, Teva, Vector, and Vedanta Research. He holds stock/options in Axon, Biohaven, CoolTech, and Manistee. Other authors report various disclosures.

Dr. Halker Singh is deputy editor of Headache, where the study was published, but was not aware of the study until its publication.

Article Source

FROM HEADACHE


Stroke Recurrence Risk Doubles in Patients With AF Who Stop Anticoagulation Therapy

Article Type
Changed
Fri, 07/05/2024 - 12:16

Patients with atrial fibrillation who discontinued oral anticoagulation (OAC) therapy after an ischemic stroke faced double the risk of a recurrent stroke within 1 year compared with counterparts who didn't stop the drugs, a new Danish nationwide cohort study finds.

Among 8,119 patients aged 50 years and older (54.1% male, mean age 78.4), 4.3% had a recurrent stroke within 1 year following discharge for the initial stroke, reported David Gaist, PhD, of Odense University Hospital, Odense, Denmark, and colleagues in JAMA Neurology.

An adjusted analysis found that those who stopped therapy were more than twice as likely to experience another stroke over a mean 2.9 years (13.4% vs 6.8%, adjusted odds ratio [aOR] = 2.13; 95% confidence interval [CI], 1.57-2.89).

The findings highlight the preventive power of OAC therapy, Dr. Gaist said in an interview, and point to the importance of counseling patients about the benefits of the drugs. “Clinicians can provide balanced information on the pros and cons of discontinuing oral anticoagulants as well as lay out plans on when to restart the medication,” he said.

The researchers launched the study “to provide data on how often recurrent ischemic strokes occur in a large, unselected cohort of patients with atrial fibrillation who had a stroke and started or restarted oral anticoagulants, a situation mirroring what we see in our everyday lives as clinicians,” Dr. Gaist said. “We also wanted to see if patients with breakthrough strokes had particular characteristics compared with patients who did not have a recurrent stroke. Finally, we wanted to quantify a very simple cause of breakthrough stroke by answering the following question: How many of these patients had stopped taking their oral anticoagulant?”

A Large, Unselected Patient Cohort

Dr. Gaist and colleagues tracked 8,119 patients with ischemic stroke and atrial fibrillation who started or restarted OAC therapy within 30 days following their discharge between 2014 and 2021. Patients either had atrial fibrillation before their stroke or developed it afterward.

Eighty-one percent of patients had hypertension, 19.7% had diabetes, and 27.3% had ischemic heart disease; 35.3% had never smoked and smoking information was missing for 15.9%. Race/ethnicity information was not provided.

Patients were followed for an average of 2.9 years until 2022, and all were alive at least 30 days after discharge. During follow-up, 663 patients had a recurrent ischemic stroke, of whom 80.4% were on OAC therapy; the cumulative recurrence rate was 4.3% at 1 year and rose to 6.5% at 2 years.

While the researchers thought the number of strokes was high, Dr. Gaist said, this isn’t a sign that the drugs aren’t working. “Oral anticoagulant use in secondary prevention in atrial fibrillation is guideline-supported as it has been proven to reduce the risk of stroke by roughly two thirds.”

Of study participants at baseline, 37.9% took oral anticoagulants, 23.5% took direct oral anticoagulants (DOACs; dabigatran, rivaroxaban, apixaban, and edoxaban), and 15.1% took vitamin K antagonists. In a nested case-control analysis of 663 cases (58.7% men, mean age 80.1) matched to 2,652 controls, at admission for ischemic stroke, 80.4% were on OAC therapy, and 8%-11% of patients stopped OAC therapy after their strokes, the researchers reported.

Patients who stopped OAC therapy had more severe recurrent strokes at 7 days than those who didn't (median Scandinavian Stroke Scale [SSS] score = 40.0 vs 46.0, respectively, with lower scores indicating greater severity; aOR = 2.10; 95% CI, 1.31-3.36). Those who stopped OAC therapy also had higher mortality rates at 7 days (11.2% vs 3.9%, respectively) and 30 days (28.1% vs 10.9%, respectively).

It’s not clear why some patients discontinued OAC therapy. “We looked for evidence of serious bleeding or surgical procedures around the time of anticoagulant discontinuation but found this only to be the case in roughly 10% of these patients,” Dr. Gaist said.

He added that the study probably “underestimates the issue of anticoagulant discontinuation, particularly for DOACs, where a shorter half-life compared with warfarin means that even a short drug-break of a few days puts the patient at increased risk of stroke.”

The authors noted study limitations, including the lack of data on actual medication use, alcohol use, stroke etiology, lesion location, and socioeconomic status. They also noted that the study population was mostly of European origin.

No Surprises

Steven R. Messe, MD, professor of neurology at the Hospital of the University of Pennsylvania, Philadelphia, who didn’t take part in the study but is familiar with its findings, said in an interview that the study is a “well-done analysis.”

The findings are not surprising, he said. “The overall risk of stroke recurrence was 4.3% at 1 year while the mortality rate was higher at 15.4%. Given that the median CHA2DS2-VASc score was 4 and the average age was 79, the stroke recurrence rate and mortality rate are in line with prior studies.”

In regard to the power of OAC therapy to prevent recurrent strokes, Dr. Messe noted that patients may not be adhering to prescribed regimens. Also, "while DOACs are clearly safer than vitamin K–dependent anticoagulants, the medications are generally not dose adjusted. It is possible that adjusting the dose based on measured anti-Xa levels to ensure therapeutic anticoagulant effects may reduce the stroke risk further."

He added that “most of these patients with prior stroke and atrial fibrillation are vasculopathic and at risk of additional strokes due to other mechanisms such as small vessel or large vessel disease.”

In the big picture, the study "confirms again that anticoagulation should be prescribed to all patients with atrial fibrillation and prior stroke, unless there is a strong bleeding risk contraindication," Dr. Messe said. "These patients are clearly at high risk of stroke recurrence and mortality, and all risk factors should be aggressively managed."

Researchers are exploring other options, he said. "For example, there are studies of factor XI inhibitors that could be added to a DOAC for additional reductions in ischemic stroke. In addition, in patients undergoing cardiac surgery, the randomized trial LAAOS III demonstrated that surgical left atrial appendage occlusion in addition to anticoagulation may provide additional stroke prevention."

Dr. Gaist disclosed personal fees from Pfizer and Bristol Myers Squibb, and grants from Bayer. Several other authors reported various relationships with industry. Dr. Messe has no disclosures.
 

Article Source

FROM JAMA NEUROLOGY


Is Anxiety a Prodromal Feature of Parkinson’s Disease?

Article Type
Changed
Tue, 07/02/2024 - 12:34

Individuals with anxiety have at least a twofold higher risk of developing Parkinson’s disease than those without anxiety, new research suggested.

Investigators drew on 10 years of data from a UK primary care registry to compare almost 110,000 patients who developed anxiety after the age of 50 years with close to 900,000 matched controls without anxiety.

After adjusting for a variety of sociodemographic, lifestyle, psychiatric, and neurological factors, they found that the risk of developing Parkinson’s disease was double in those with anxiety, compared with controls.

“Anxiety is known to be a feature of the early stages of Parkinson’s disease, but prior to our study, the prospective risk of Parkinson’s in those over the age of 50 with new-onset anxiety was unknown,” colead author Juan Bazo Alvarez, a senior research fellow in the Division of Epidemiology and Health at University College London, London, England, said in a news release.

The study was published online in the British Journal of General Practice.

The presence of anxiety is increased in prodromal Parkinson’s disease, but the prospective risk for Parkinson’s disease in those aged 50 years or older with new-onset anxiety was largely unknown.

Investigators analyzed data from a large UK primary care dataset that includes all people aged between 50 and 99 years who were registered with a participating practice from Jan. 1, 2008, to Dec. 31, 2018.

They identified 109,435 people (35% men) with more than one anxiety record in the database but no previous record of anxiety for 1 year or more and 878,256 people (37% men) with no history of anxiety (control group).

Features of Parkinson’s disease such as sleep problems, depression, tremor, and impaired balance were then tracked from the point of the anxiety diagnosis until 1 year before the Parkinson’s disease diagnosis.

Among those with anxiety, 331 developed Parkinson’s disease during the follow-up period, with a median time to diagnosis of 4.9 years after the first recorded episode of anxiety.

The incidence of Parkinson’s disease was 1.02 per 1000 person-years (95% CI, 0.92-1.13) in those with anxiety versus 0.49 (95% CI, 0.47-0.52) in those without anxiety.

After adjustment for age, sex, social deprivation, lifestyle factors, severe mental illness, head trauma, and dementia, the risk for Parkinson’s disease was double in those with anxiety, compared with the non-anxiety group (hazard ratio, 2.1; 95% CI, 1.9-2.4).

Individuals without anxiety also developed Parkinson’s disease later than those with anxiety.

The researchers identified specific symptoms associated with later development of Parkinson’s disease in those with anxiety, including depression, sleep disturbance, fatigue, and cognitive impairment, among others.

“The results suggest that there is a strong association between anxiety and diagnosis of Parkinson’s disease in patients aged over 50 years who present with a new diagnosis of anxiety,” the authors wrote. “This provides evidence for anxiety as a prodromal presentation of Parkinson’s disease.”

Future research “should explore anxiety in relation to other prodromal symptoms and how this symptom complex is associated with the incidence of Parkinson’s disease,” the researchers wrote. Doing so “may lead to earlier diagnosis and better management of Parkinson’s disease.”

This study was funded by the European Union. Specific authors received funding from the National Institute for Health and Care Research and the Alzheimer’s Society Clinical Training Fellowship program. The authors declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Individuals with anxiety have at least a twofold higher risk of developing Parkinson’s disease than those without anxiety, new research suggested.

Investigators drew on 10-year data from primary care registry to compare almost 110,000 patients who developed anxiety after the age of 50 years with close to 900,000 matched controls without anxiety.

After adjusting for a variety of sociodemographic, lifestyle, psychiatric, and neurological factors, they found that the risk of developing Parkinson’s disease was double in those with anxiety, compared with controls.

“Anxiety is known to be a feature of the early stages of Parkinson’s disease, but prior to our study, the prospective risk of Parkinson’s in those over the age of 50 with new-onset anxiety was unknown,” colead author Juan Bazo Alvarez, a senior research fellow in the Division of Epidemiology and Health at University College London, London, England, said in a news release.

The study was published online in the British Journal of General Practice.

The presence of anxiety is increased in prodromal Parkinson’s disease, but the prospective risk for Parkinson’s disease in those aged 50 years or older with new-onset anxiety was largely unknown.

Investigators analyzed data from a large UK primary care dataset that includes all people aged between 50 and 99 years who were registered with a participating practice from Jan. 1, 2008, to Dec. 31, 2018.

They identified 109,435 people (35% men) with more than one anxiety record in the database but no previous record of anxiety for 1 year or more and 878,256 people (37% men) with no history of anxiety (control group).

Features of Parkinson’s disease such as sleep problems, depression, tremor, and impaired balance were then tracked from the point of the anxiety diagnosis until 1 year before the Parkinson’s disease diagnosis.

Among those with anxiety, 331 developed Parkinson’s disease during the follow-up period, with a median time to diagnosis of 4.9 years after the first recorded episode of anxiety.

The incidence of Parkinson’s disease was 1.2 per 1000 person-years (95% CI, 0.92-1.13) in those with anxiety versus 0.49 (95% CI, 0.47-0.52) in those without anxiety.

After adjustment for age, sex, social deprivation, lifestyle factors, severe mental illness, head trauma, and dementia, the risk for Parkinson’s disease was double in those with anxiety, compared with the non-anxiety group (hazard ratio, 2.1; 95% CI, 1.9-2.4).

Individuals without anxiety who went on to develop Parkinson’s disease did so later than those with anxiety.

The researchers identified specific symptoms that were associated with later development of Parkinson’s disease in those with anxiety, including depression, sleep disturbance, fatigue, and cognitive impairment.

“The results suggest that there is a strong association between anxiety and diagnosis of Parkinson’s disease in patients aged over 50 years who present with a new diagnosis of anxiety,” the authors wrote. “This provides evidence for anxiety as a prodromal presentation of Parkinson’s disease.”

Future research “should explore anxiety in relation to other prodromal symptoms and how this symptom complex is associated with the incidence of Parkinson’s disease,” the researchers wrote. Doing so “may lead to earlier diagnosis and better management of Parkinson’s disease.”

This study was funded by the European Union. Specific authors received funding from the National Institute for Health and Care Research and the Alzheimer’s Society Clinical Training Fellowship program. The authors declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM THE BRITISH JOURNAL OF GENERAL PRACTICE

Benzos Are Hard on the Brain, But Do They Raise Dementia Risk?

Article Type
Changed
Tue, 07/02/2024 - 12:20

New research supports current guidelines cautioning against long-term use of benzodiazepines.

The study of more than 5000 older adults found that benzodiazepine use was associated with an accelerated reduction in the volume of the hippocampus and amygdala — brain regions involved in memory and mood regulation. However, benzodiazepine use overall was not associated with an increased risk for dementia.

The findings suggest that benzodiazepine use “may have subtle, long-term impact on brain health,” lead investigator Frank Wolters, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands, and colleagues wrote.

The study was published online in BMC Medicine.
 

Conflicting Evidence 

Benzodiazepines are commonly prescribed in older adults for anxiety and sleep disorders. Though the short-term cognitive side effects are well documented, the long-term impact on neurodegeneration and dementia risk remains unclear. Some studies have linked benzodiazepine use to an increased risk for dementia, whereas others have not.

Dr. Wolters and colleagues assessed the effect of benzodiazepine use on long-term dementia risk and on imaging markers of neurodegeneration in 5443 cognitively healthy adults (mean age, 71 years; 57% women) from the population-based Rotterdam Study. 

Benzodiazepine use between 1991 and 2008 was determined using pharmacy dispensing records, and dementia incidence was determined from medical records. 

Half of the participants had used benzodiazepines at any time in the 15 years before baseline (2005-2008); 47% used anxiolytics, 20% used sedative-hypnotics, 34% used both, and 13% were still using the drugs at the baseline assessment. 

During an average follow-up of 11 years, 13% of participants developed dementia. 

Overall, use of benzodiazepines was not associated with dementia risk, compared with never-use (hazard ratio [HR], 1.06), irrespective of cumulative dose. 

The risk for dementia was somewhat higher with any use of anxiolytics than with sedative-hypnotics (HR, 1.17 vs HR, 0.92), although neither was statistically significant. The highest risk estimates were observed for high cumulative dose of anxiolytics (HR, 1.33). 

Sensitivity analyses of the two most commonly used anxiolytics found no differences in risk between use of short half-life oxazepam and long half-life diazepam (HR, 1.01 and HR, 1.06, respectively, for ever-use, compared with never-use for oxazepam and diazepam).
 

Brain Atrophy

The researchers investigated potential associations between benzodiazepine use and brain volumes using MRI scans from 4836 participants.

They found that current use of a benzodiazepine at baseline was significantly associated with lower total brain volume — as well as lower hippocampus, amygdala, and thalamus volume cross-sectionally — and with accelerated volume loss of the hippocampus and, to a lesser extent, amygdala longitudinally. 

Imaging findings did not differ by type of benzodiazepine used or cumulative dose. 

“Given the availability of effective alternative pharmacological and nonpharmacological treatments for anxiety and sleep problems, it is important to carefully consider the necessity of prolonged benzodiazepine use in light of potential detrimental effects on brain health,” the authors wrote. 
 

Risks Go Beyond the Brain

Commenting on the study, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, Florida, noted that “chronic benzodiazepine use may reduce neuroplasticity, potentially interfering with the brain’s ability to form new connections and adapt.

“Long-term use can lead to down-regulation of GABA receptors, altering the brain’s natural inhibitory mechanisms and potentially contributing to tolerance and withdrawal symptoms. Prolonged use can also disrupt the balance of various neurotransmitter systems beyond just GABA, potentially affecting mood, cognition, and overall brain function,” said Dr. Lakhan, who was not involved in the study. 

“While the literature is mixed on chronic benzodiazepine use and dementia risk, prolonged use has consistently been associated with accelerated volume loss in certain brain regions, particularly the hippocampus and amygdala,” which are responsible for memory, learning, and emotional regulation, he noted. 

“Beyond cognitive impairments and brain volume loss, chronic benzodiazepine use is associated with tolerance and dependence, potential for abuse, interactions with other drugs, and increased fall risk, especially in older adults,” Dr. Lakhan added.

Current guidelines discourage long-term use of benzodiazepines because of risk for psychological and physical dependence; falls; and cognitive impairment, especially in older adults. Nevertheless, research shows that 30%-40% of older benzodiazepine users stay on the medication beyond the recommended period of several weeks.

Donovan T. Maust, MD, Department of Psychiatry, University of Michigan Medical School, Ann Arbor, said in an interview that these new findings are consistent with other recently published observational research suggesting benzodiazepine use is not linked to dementia risk. 

“I realize that such meta-analyses that find a positive relationship between benzodiazepines and dementia are out there, but they include older, less rigorous studies,” said Dr. Maust, who was not part of the new study. “In my opinion, the jury is not still out on this topic. However, there are plenty of other reasons to avoid them — and in particular, starting them — in older adults, most notably the increased risk of fall injury as well as increased overdose risk when taken along with opioids.”

A version of this article first appeared on Medscape.com.


FROM BMC MEDICINE

Cardiovascular Health Becoming a Major Risk Factor for Dementia

Article Type
Changed
Wed, 07/10/2024 - 14:05

In a shifting landscape in dementia risk factors, cardiovascular health is now taking precedence.

That’s according to researchers from University College London (UCL) in the United Kingdom who analyzed 27 papers about dementia with data collected over more than 70 years. They calculated what share of dementia cases was attributable to different risk factors. Their findings were recently published in the Lancet Public Health.

Top risk factors for dementia over the years have been hypertension, obesity, diabetes, low education, and smoking, according to a news release on the findings. But the prevalence of these risk factors has changed over the decades.

Researchers said smoking and education have become less important risk factors because of “population-level interventions,” such as stop-smoking campaigns and compulsory public education. On the other hand, obesity and diabetes rates have increased and become bigger risk factors.

Hypertension remains the greatest risk factor, even though doctors and public health groups are putting more emphasis on managing the condition, the study said.

“Cardiovascular risk factors may have contributed more to dementia risk over time, so these deserve more targeted action for future dementia prevention efforts,” said Naaheed Mukadam, PhD, an associate professor at UCL and the lead author of the study.

Eliminating modifiable risk factors could theoretically prevent 40% of dementia cases, the release said. 

The CDC says that an estimated 5.8 million people in the United States have Alzheimer’s disease and related dementias, including 5.6 million people ages 65 and older and about 200,000 under age 65. The UCL release said an estimated 944,000 people in the U.K. have dementia. 

A version of this article first appeared on WebMD.com.


FROM THE LANCET PUBLIC HEALTH

Revised Criteria for Alzheimer’s Diagnosis, Staging Released

Article Type
Changed
Mon, 07/01/2024 - 15:15

A work group convened by the Alzheimer’s Association has released revised biology-based criteria for the diagnosis and staging of Alzheimer’s disease, including a new biomarker classification system that incorporates fluid and imaging biomarkers as well as an updated disease staging system. 

“Plasma markers are here now, and it’s very important to incorporate them into the criteria for diagnosis,” said senior author Maria C. Carrillo, PhD, Alzheimer’s Association chief science officer and medical affairs lead. 

The revised criteria are the first updates since 2018.

“Defining diseases biologically, rather than based on syndromic presentation, has long been standard in many areas of medicine — including cancer, heart disease, and diabetes — and is becoming a unifying concept common to all neurodegenerative diseases,” lead author Clifford Jack Jr, MD, with Mayo Clinic, Rochester, Minnesota, said in a news release from the Alzheimer’s Association. 

“These updates to the diagnostic criteria are needed now because we know more about the underlying biology of Alzheimer’s and we are able to measure those changes,” Dr. Jack added. 

The 2024 revised criteria for diagnosis and staging of Alzheimer’s disease were published online in Alzheimer’s & Dementia.
 

Core Biomarkers Defined

The revised criteria define Alzheimer’s disease as a biologic process that begins with the appearance of Alzheimer’s disease neuropathologic change (ADNPC) in the absence of symptoms. Progression of the neuropathologic burden leads to the later appearance and progression of clinical symptoms.

The work group organized Alzheimer’s disease biomarkers into three broad categories: (1) core biomarkers of ADNPC, (2) nonspecific biomarkers that are important in Alzheimer’s disease but are also involved in other brain diseases, and (3) biomarkers of diseases or conditions that commonly coexist with Alzheimer’s disease.

Core Alzheimer’s biomarkers are subdivided into Core 1 and Core 2. 

Core 1 biomarkers become abnormal early in the disease course and directly measure either amyloid plaques or phosphorylated tau (p-tau). They include amyloid PET; cerebrospinal fluid (CSF) amyloid beta 42/40 ratio, CSF p-tau181/amyloid beta 42 ratio, and CSF total (t)-tau/amyloid beta 42 ratio; and “accurate” plasma biomarkers, such as p-tau217. 

“An abnormal Core 1 biomarker result is sufficient to establish a diagnosis of Alzheimer’s disease and to inform clinical decision making throughout the disease continuum,” the work group wrote. 

Core 2 biomarkers become abnormal later in the disease process and are more closely linked with the onset of symptoms. They include tau PET and certain soluble tau fragments associated with tau proteinopathy (eg, MTBR-tau243), as well as pT205 and nonphosphorylated mid-region tau fragments. 

Core 2 biomarkers, when combined with Core 1, may be used to stage biologic disease severity; abnormal Core 2 biomarkers “increase confidence that Alzheimer’s disease is contributing to symptoms,” the work group noted. 

The revised criteria give clinicians “the flexibility to use plasma or PET scans or CSF,” Dr. Carrillo said. “They will have several tools that they can choose from and offer this variety of tools to their patients. We need different tools for different individuals. There will be differences in coverage and access to these diagnostics.” 

The revised criteria also include an integrated biologic and clinical staging scheme that acknowledges the fact that common co-pathologies, cognitive reserve, and resistance may modify relationships between clinical and biologic Alzheimer’s disease stages. 
 

 

 

Formal Guidelines to Come 

The work group noted that currently, the clinical use of Alzheimer’s disease biomarkers is intended for the evaluation of symptomatic patients, not cognitively unimpaired individuals.

Disease-targeted therapies have not yet been approved for cognitively unimpaired individuals. For this reason, the work group currently recommends against diagnostic testing in cognitively unimpaired individuals outside the context of observational or therapeutic research studies. 

This recommendation would change in the future if disease-targeted therapies that are currently being evaluated in trials demonstrate a benefit in preventing cognitive decline and are approved for use in preclinical Alzheimer’s disease, they wrote. 

They emphasize that the revised criteria are not intended to provide step-by-step clinical practice guidelines for clinicians. Rather, they provide general principles to inform diagnosis and staging of Alzheimer’s disease that reflect current science.

“This is just the beginning,” said Dr. Carrillo. “This is a gathering of the evidence to date and putting it in one place so we can have a consensus and actually a way to test it and make it better as we add new science.”

This also serves as a “springboard” for the Alzheimer’s Association to create formal clinical guidelines. “That will come, hopefully, over the next 12 months. We’ll be working on it, and we hope to have that in 2025,” Dr. Carrillo said. 

The revised criteria also emphasize the role of the clinician. 

“The biologically based diagnosis of Alzheimer’s disease is meant to assist, rather than supplant, the clinical evaluation of individuals with cognitive impairment,” the work group wrote in a related commentary published online in Nature Medicine

Recent diagnostics and therapeutic developments “herald a virtuous cycle in which improvements in diagnostic methods enable more sophisticated treatment approaches, which in turn steer advances in diagnostic methods,” they continued. “An unchanging principle, however, is that effective treatment will always rely on the ability to diagnose and stage the biology driving the disease process.”

Funding for this research was provided by the National Institutes of Health, Alexander family professorship, GHR Foundation, Alzheimer’s Association, Veterans Administration, Life Molecular Imaging, Michael J. Fox Foundation for Parkinson’s Research, Avid Radiopharmaceuticals, Eli Lilly, Gates Foundation, Biogen, C2N Diagnostics, Eisai, Fujirebio, GE Healthcare, Roche, National Institute on Aging, Roche/Genentech, BrightFocus Foundation, Hoffmann-La Roche, Novo Nordisk, Toyama, National MS Society, Alzheimer Drug Discovery Foundation, and others. A complete list of donors and disclosures is included in the original article.

 A version of this article appeared on Medscape.com.

Publications
Topics
Sections

A work group convened by the Alzheimer’s Association has released revised biology-based criteria for the diagnosis and staging of Alzheimer’s disease, including a new biomarker classification system that incorporates fluid and imaging biomarkers as well as an updated disease staging system. 

“Plasma markers are here now, and it’s very important to incorporate them into the criteria for diagnosis,” said senior author Maria C. Carrillo, PhD, Alzheimer’s Association chief science officer and medical affairs lead. 

The revised criteria are the first updates since 2018.

“Defining diseases biologically, rather than based on syndromic presentation, has long been standard in many areas of medicine — including cancer, heart disease, and diabetes — and is becoming a unifying concept common to all neurodegenerative diseases,” lead author Clifford Jack Jr, MD, with Mayo Clinic, Rochester, Minnesota, said in a news release from the Alzheimer’s Association. 

“These updates to the diagnostic criteria are needed now because we know more about the underlying biology of Alzheimer’s and we are able to measure those changes,” Dr. Jack added. 

The 2024 revised criteria for diagnosis and staging of Alzheimer’s disease were published online in Alzheimer’s & Dementia.

Core Biomarkers Defined

The revised criteria define Alzheimer’s disease as a biologic process that begins with the appearance of Alzheimer’s disease neuropathologic change (ADNPC) in the absence of symptoms. Progression of the neuropathologic burden leads to the later appearance and progression of clinical symptoms.

The work group organized Alzheimer’s disease biomarkers into three broad categories: (1) core biomarkers of ADNPC, (2) nonspecific biomarkers that are important in Alzheimer’s disease but are also involved in other brain diseases, and (3) biomarkers of diseases or conditions that commonly coexist with Alzheimer’s disease.

Core Alzheimer’s biomarkers are subdivided into Core 1 and Core 2. 

Core 1 biomarkers become abnormal early in the disease course and directly measure either amyloid plaques or phosphorylated tau (p-tau). They include amyloid PET; cerebrospinal fluid (CSF) amyloid beta 42/40 ratio, CSF p-tau181/amyloid beta 42 ratio, and CSF total (t)-tau/amyloid beta 42 ratio; and “accurate” plasma biomarkers, such as p-tau217. 

“An abnormal Core 1 biomarker result is sufficient to establish a diagnosis of Alzheimer’s disease and to inform clinical decision making throughout the disease continuum,” the work group wrote. 

Core 2 biomarkers become abnormal later in the disease process and are more closely linked with the onset of symptoms. They include tau PET and certain soluble tau fragments associated with tau proteinopathy (eg, MTBR-tau243), as well as pT205 and nonphosphorylated mid-region tau fragments. 

Core 2 biomarkers, when combined with Core 1, may be used to stage biologic disease severity; abnormal Core 2 biomarkers “increase confidence that Alzheimer’s disease is contributing to symptoms,” the work group noted. 

The revised criteria give clinicians “the flexibility to use plasma or PET scans or CSF,” Dr. Carrillo said. “They will have several tools that they can choose from and offer this variety of tools to their patients. We need different tools for different individuals. There will be differences in coverage and access to these diagnostics.” 

The revised criteria also include an integrated biologic and clinical staging scheme that acknowledges that common co-pathologies, cognitive reserve, and resistance may modify the relationships between clinical and biologic Alzheimer’s disease stages. 

Formal Guidelines to Come 

The work group noted that currently, the clinical use of Alzheimer’s disease biomarkers is intended for the evaluation of symptomatic patients, not cognitively unimpaired individuals.

Disease-targeted therapies have not yet been approved for cognitively unimpaired individuals. For this reason, the work group currently recommends against diagnostic testing in cognitively unimpaired individuals outside the context of observational or therapeutic research studies. 

This recommendation would change in the future if disease-targeted therapies that are currently being evaluated in trials demonstrate a benefit in preventing cognitive decline and are approved for use in preclinical Alzheimer’s disease, they wrote. 

They emphasize that the revised criteria are not intended to provide step-by-step clinical practice guidelines for clinicians. Rather, they provide general principles to inform diagnosis and staging of Alzheimer’s disease that reflect current science.

“This is just the beginning,” said Dr. Carrillo. “This is a gathering of the evidence to date and putting it in one place so we can have a consensus and actually a way to test it and make it better as we add new science.”

This also serves as a “springboard” for the Alzheimer’s Association to create formal clinical guidelines. “That will come, hopefully, over the next 12 months. We’ll be working on it, and we hope to have that in 2025,” Dr. Carrillo said. 

The revised criteria also emphasize the role of the clinician. 

“The biologically based diagnosis of Alzheimer’s disease is meant to assist, rather than supplant, the clinical evaluation of individuals with cognitive impairment,” the work group wrote in a related commentary published online in Nature Medicine.

Recent diagnostics and therapeutic developments “herald a virtuous cycle in which improvements in diagnostic methods enable more sophisticated treatment approaches, which in turn steer advances in diagnostic methods,” they continued. “An unchanging principle, however, is that effective treatment will always rely on the ability to diagnose and stage the biology driving the disease process.”

Funding for this research was provided by the National Institutes of Health, Alexander family professorship, GHR Foundation, Alzheimer’s Association, Veterans Administration, Life Molecular Imaging, Michael J. Fox Foundation for Parkinson’s Research, Avid Radiopharmaceuticals, Eli Lilly, Gates Foundation, Biogen, C2N Diagnostics, Eisai, Fujirebio, GE Healthcare, Roche, National Institute on Aging, Roche/Genentech, BrightFocus Foundation, Hoffmann-La Roche, Novo Nordisk, Toyama, National MS Society, Alzheimer Drug Discovery Foundation, and others. A complete list of donors and disclosures is included in the original article.

 A version of this article appeared on Medscape.com.

FROM ALZHEIMER’S & DEMENTIA


Common Cognitive Test Falls Short for Concussion Diagnosis

Article Type
Changed
Mon, 07/01/2024 - 14:13

 

A tool routinely used to evaluate concussion in college athletes fails to accurately diagnose the condition in many cases, a new study showed.

Investigators found that almost half of athletes diagnosed with a concussion tested normally on the Sports Concussion Assessment Tool 5 (SCAT5), the recommended tool for measuring cognitive skills in concussion evaluations. The most accurate measure of concussion was symptoms reported by the athletes.

“If you don’t do well on the cognitive exam, it suggests you have a concussion. But many people who are concussed do fine on the exam,” lead author Kimberly Harmon, MD, professor of family medicine and section head of sports medicine at the University of Washington School of Medicine, Seattle, said in a news release.

The study was published online in JAMA Network Open.

Introduced in 2004, the SCAT was created to standardize the collection of information clinicians use to diagnose concussion, including evaluation of symptoms, orientation, and balance. It also uses a 10-word list to assess immediate memory and delayed recall.

Dr. Harmon’s own experiences as a team physician led her to wonder about the accuracy of the cognitive screening portion of the SCAT. She saw that “some people were concussed, and they did well on the recall test. Some people weren’t concussed, and they didn’t do well. So I thought we should study it,” she said.

Investigators compared 92 National Collegiate Athletic Association (NCAA) Division I athletes who had sustained a concussion between 2020 and 2022 and had a concussion evaluation within 48 hours with 92 matched nonconcussed teammates (overall cohort, 52% men). Most concussions occurred in those who played football, followed by volleyball.

All athletes had previously completed NCAA-required baseline concussion screenings. Participants completed the SCAT5 screening test within 2 weeks of the incident concussion.

No significant differences were found between the baseline scores of athletes with and without concussion. Moreover, responses on the word recall section of the SCAT5 held little predictive value for concussion.

Nearly half (45%) of athletes with concussion performed at or even above their baseline cognitive performance, a finding the authors said highlights the limitations of the cognitive components of the SCAT5.

The most accurate predictor of concussion was participants’ responses to questions about their symptoms.

“If you get hit in the head and go to the sideline and say, ‘I have a headache, I’m dizzy, I don’t feel right,’ I can say with pretty good assurance that you have a concussion,” Dr. Harmon continued. “I don’t need to do any testing.”

Unfortunately, the problem is “that some athletes don’t want to come out. They don’t report their symptoms or may not recognize their symptoms. So then you need an objective, accurate test to tell you whether you can safely put the athlete back on the field. We don’t have that right now.”

The study did not control for concussion history, and the all–Division I cohort means the findings may not be generalizable to other athletes.

Nevertheless, investigators said the study “affirms that reported symptoms are the most sensitive indicator of concussion, and there are limitations to the objective cognitive testing included in the SCAT.” They concluded that concussion “remains a clinical diagnosis that should be based on a thorough review of signs, symptoms, and clinical findings.”

This study was funded in part by donations from University of Washington alumni Jack and Luellen Cherneski and the Chisholm Foundation. Dr. Harmon reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM JAMA NETWORK OPEN
