Neurology Reviews covers innovative and emerging news in neurology and neuroscience every month, with a focus on practical approaches to treating Parkinson's disease, epilepsy, headache, stroke, multiple sclerosis, Alzheimer's disease, and other neurologic disorders.

High-dose vitamin D and MS relapse: New phase 3 data

Article Type
Changed
Mon, 05/01/2023 - 16:43

High-dose vitamin D in patients with relapsing-remitting multiple sclerosis (RRMS) does not prevent relapse, results from a randomized controlled trial show. However, at least one expert believes the study’s exclusion criteria may have been too broad.

The investigation of vitamin D to prevent MS relapse grew out of older observational studies showing that people with higher blood levels of vitamin D were less likely to develop MS, said study investigator Ellen Mowry, MD, Richard T. and Frances W. Johnson professor of neurology, Johns Hopkins University, Baltimore.

Later studies in which participants were given vitamin D as a therapeutic option for MS “were disappointing as the vitamin D had minimal effect,” she said.

“While we were excited by early data suggesting that vitamin D may have an important impact on MS, it’s essential to follow those linkage studies with the gold standard clinical evidence, which we have here,” Dr. Mowry added.

The findings were published online in eClinicalMedicine.
 

No difference in relapse risk

The multisite, phase 3 Vitamin D to Ameliorate MS (VIDAMS) clinical trial included 172 participants aged 18-50 years with RRMS from 16 neurology clinics between 2012 and 2019.

Inclusion criteria were having one or more clinical episodes of MS in the past year and at least one brain lesion on MRI in the past year or having two or more clinical episodes in the past year. Eligible participants also had to have a score of 4 or less on the Kurtzke Expanded Disability Status Scale.

A total of 83 participants were randomly assigned to receive low-dose vitamin D3 (600 IU/day) and 89 to receive high-dose vitamin D3 (5,000 IU/day). Each participant took the vitamin tablet along with glatiramer acetate, a synthetic protein that mimics myelin basic protein.

Participants were assessed every 12 weeks to measure serum 25(OH)D levels and every 24 weeks with a battery of movement and coordination tests; they also underwent two 3T clinical brain MRIs to check for lesions.

By the trial’s end at 96 weeks, the researchers found no differences in relapse risk between the high- and low-dose groups (P = .57). In addition, there were no differences in MRI outcomes between the two groups.
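As a rough illustration of the kind of statistic behind such a between-group P value, here is a minimal two-proportion z-test in Python. The relapse counts below are invented for illustration (only the group sizes, 83 and 89, come from the article), and the trial's actual analysis may well have used a different method.

```python
import math

# Hypothetical sketch of a two-proportion comparison of relapse risk.
# Relapse counts are invented, NOT VIDAMS data; only the group sizes
# (83 low-dose, 89 high-dose) match the article.

def two_proportion_z_test(r1, n1, r2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = r1 / n1, r2 / n2
    pooled = (r1 + r2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via the error function
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)

# Invented relapse counts just to exercise the test:
p_value = two_proportion_z_test(28, 83, 26, 89)
print(round(p_value, 3))
```

A nonsignificant result like the trial's (P = .57) simply means the observed difference in relapse proportions is well within what chance alone would produce.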

Dr. Mowry said that more than a few people have asked her if she is disappointed by the results of the VIDAMS trial. “I tell them that no, I’m not – that we are scientists and clinicians, and it is our job to understand what they can do to fight their disease. And if the answer is not vitamin D, that’s OK – we have many other ideas.”

These include helping patients minimize cardiometabolic comorbidities, such as heart disease and high blood pressure, she said.
 

Exclusion criteria too broad?

Commenting on the findings, Alberto Ascherio, MD, professor of epidemiology and nutrition at Harvard School of Public Health, Boston, said a key principle of recommending vitamin supplements is that they are, generally speaking, only beneficial for individuals with vitamin deficiencies.

He noted that “patients with vitamin D deficiency (25(OH)D < 15 ng/mL, which corresponds to 37.5 nmol/L) were excluded from this study. Most importantly, the baseline mean 25(OH)D levels were about 30 ng/mL (75 nmol/L), which is considered a sufficient level (the IOM considers 20 ng/mL = 50 nmol/L as an adequate level),” with the level further increasing during the trial due to the supplementation.

“It would be a serious mistake to conclude from this trial (or any of the previous trials) that vitamin D supplementation is not important in MS patients,” Dr. Ascherio said.

He added that many individuals with MS have serum vitamin D levels below 20 ng/mL (50 nmol/L) and that this was the median serum value in studies among individuals with MS in Europe.

“These patients would almost certainly benefit from moderate doses of vitamin D supplements or judicious UV light exposure. Most likely even patients with sufficient but suboptimal 25(OH)D levels (between 20 and 30 ng/mL, or 50 and 75 nmol/L) would benefit from an increase,” he said.
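The ng/mL and nmol/L figures quoted above are related by the standard 25(OH)D conversion factor of roughly 2.5 (1 ng/mL ≈ 2.496 nmol/L, commonly rounded). A quick sketch:

```python
# Conventional conversion for serum 25(OH)D: 1 ng/mL ≈ 2.5 nmol/L
# (from the molar mass of 25-hydroxyvitamin D, ~400.6 g/mol:
# 1 ng/mL = 1 ug/L, and 1000 / 400.6 ≈ 2.496, rounded to 2.5).
NG_ML_TO_NMOL_L = 2.5

def ng_ml_to_nmol_l(value_ng_ml: float) -> float:
    """Convert a serum 25(OH)D concentration from ng/mL to nmol/L."""
    return value_ng_ml * NG_ML_TO_NMOL_L

# Thresholds cited in the article:
print(ng_ml_to_nmol_l(15))  # trial's deficiency exclusion cutoff -> 37.5
print(ng_ml_to_nmol_l(20))  # IOM adequate level -> 50.0
print(ng_ml_to_nmol_l(30))  # approximate baseline mean in VIDAMS -> 75.0
```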

The study was funded by the National Multiple Sclerosis Society, Teva Neuroscience, and the National Institutes of Health. Dr. Mowry reported grant support from the National MS Society, Biogen, Genentech, and Teva Neuroscience; honoraria from UpToDate; and consulting fees from BeCare Link.
 

A version of this article first appeared on Medscape.com.


Article Source

FROM ECLINICALMEDICINE


Machine learning can predict primary progressive MS progression

Article Type
Changed
Fri, 04/28/2023 - 00:27

BOSTON – In patients with primary progressive multiple sclerosis (PPMS), machine learning can predict progression with reasonable accuracy on the basis of the blood transcriptome, according to a proof-of-concept study presented at the 2023 annual meeting of the American Academy of Neurology.

The accuracy was sufficient for the lead author of the study, Michael Gurevich, PhD, head of the Neuroimmunological Laboratory in the Multiple Sclerosis Center of Sheba Medical Center in Ramat Gan, Israel, to say that it is already clinically viable even as the tool is still evolving.

“We are looking at larger sample sizes to improve the accuracy and generalizability of our model, but we can use it now to inform treatment decisions,” Dr. Gurevich said.

In patients with PPMS, who have a highly variable course, the model predicts disability progression with an accuracy of approximately 70%, according to the data he presented. He said he believes this is already at a level that is meaningful for a risk-to-benefit calculation when considering treatment.
 

Machine learning analyzes blood samples

The study pursues the concept that the genetic information governing highly complex pathophysiological processes is captured by RNA sequencing of the blood transcriptome. While multimodal omics generate data too complex for human pattern recognition, there is a growing body of evidence, including this study, that machine learning can use these same RNA profiles to predict disease activity.

In this study, blood samples were collected from patients who participated in the phase 3 clinical ORATORIO trial that led to approval of ocrelizumab for PPMS. Analyses were conducted only on blood samples from those randomized to placebo, who, like those in the active treatment arm, were evaluated at baseline and at 12-week intervals for more than 2 years.

After development of a prediction model and creation of a training dataset, machine learning was applied to deep sequencing of the blood transcriptome to predict two endpoints. One was disease progression at 120 weeks, defined as an increase of 1 point or more on the Expanded Disability Status Scale (EDSS) with disability progression confirmed for at least 12 weeks (12W-CDP).

The other was change in brain morphology at 120 weeks, defined as a reduction of 1% or more in brain volume (120W PBVC).

The peripheral blood samples underwent RNA sequencing (RNA-Seq) using commercially available techniques. The prediction model for the disability endpoint was based on blood transcriptome data from 135 patients, of whom 53 (39%) met the endpoint at 120 weeks. The prediction model for the change in brain morphology was based on the blood transcriptome of 94 patients, of whom 63 (67%) met the endpoint.
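The article does not specify the model class used, so the following is a purely illustrative sketch: a toy nearest-centroid classifier over a small invented gene-expression panel. It stands in only for the general idea of classifying outcomes from a handful of transcriptome features, not for the study's actual pipeline.

```python
# Illustrative only: toy nearest-centroid classification over an invented
# 3-gene expression panel. Features, labels, and values are made up.

def centroid(rows):
    """Column-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    return {label: centroid([x for x, lab in zip(X, y) if lab == label])
            for label in set(y)}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the closest (squared Euclidean) centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Toy "expression" vectors for a 3-gene panel, labeled by outcome:
X = [[2.0, 0.5, 1.0], [2.2, 0.4, 1.1], [0.3, 2.1, 0.9], [0.2, 2.3, 1.0]]
y = ["progressed", "progressed", "stable", "stable"]
model = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(model, [2.1, 0.6, 1.0]))  # -> "progressed"
```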

On the basis of the 10 genes that most significantly differentiated those who met the disability endpoint from those who did not, the trained model correctly predicted the outcome in 70.9% of cases. The sensitivity was 55.6%, and the specificity was 79.0%. The positive and negative predictive values were 59.0% and 76.8%, respectively.

On the basis of the 12 genes that most significantly differentiated those who reached the 120W PBVC endpoint from those who did not, machine learning correctly predicted the outcome in 75.1% of cases. The sensitivity was 78.1%, and the specificity was 66.7%. The positive and negative predictive values were 83.3% and 58.8%, respectively.
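For readers less familiar with these metrics: accuracy, sensitivity, specificity, PPV, and NPV are all simple functions of a classifier's confusion matrix. A minimal sketch with made-up counts (not the study's data):

```python
# Hypothetical sketch of how the reported metrics relate to a confusion
# matrix. The counts below are illustrative only, not study data.

def classification_metrics(tp, fp, tn, fn):
    """Return the standard binary-classification metrics as fractions."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Example with invented counts (135 patients total):
m = classification_metrics(tp=30, fp=20, tn=60, fn=25)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the outcome is in the cohort, which is why they can differ so much between the two endpoints above.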

Typical of a PPMS trial population, the mean age of the patients was about 44 years. The mean disease duration was about 6 years. The majority of patients had an EDSS score below 5.5 at baseline. The baseline T2 lesion number was approximately 50.

If further validated by others and in larger studies, this type of information could play a valuable role in PPMS management, according to Dr. Gurevich. Now that there is an approved therapy for PPMS, such a tool could help clinicians and patients decide whether to initiate treatment early to address a high risk of progression or to delay treatment that might not be needed.
 

 

 

A useful tool

In the field of MS, most of the studies performed with machine learning have focused on the analysis of radiological images. However, others are now looking at the blood transcriptome as a potential path to better classifying a highly complex disease with substantial heterogeneity in presentation, progression, and outcome.

For example, machine learning of the blood transcriptome has also shown high accuracy in the diagnosis and classification of MS in patients with clinically isolated syndrome (CIS). One study, published in Cell Reports Medicine, was led by Cinthia Farina, PhD, Institute of Experimental Neurology, IRCCS San Raffaele Scientific Institute, Milan.

Although she did not hear the presentation by Dr. Gurevich, Dr. Farina is enthusiastic about the potential for machine learning to help manage MS through the analysis of the blood transcriptome. “I do believe that transcriptomics in peripheral immune cells may become a useful tool for MS diagnosis and prognosis,” she said.

In her own study, in which machine learning algorithms were developed and trained on peripheral blood from patients with CIS, the tool proved accurate, with strong potential to be incorporated into routine clinical management.

“Machine learning applied to the blood transcriptomes was extremely efficient, with a 95.6% accuracy in discriminating PPMS from RRMS [relapsing-remitting MS],” she reported.

Dr. Gurevich has no potential financial conflicts of interest to report. He reported funding for the study was provided by Roche. Dr. Farina reports financial relationships with Merck-Serono, Novartis, and Teva.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

BOSTON – In patients with primary progressive multiple sclerosis (PPMS), machine learning can predict progression with reasonable accuracy on the basis of the blood transcriptome, according to a proof-of-concept study presented at the 2023 annual meeting of the American Academy of Neurology.

The accuracy was sufficient for the lead author of the study, Michael Gurevich, PhD, head of the Neuroimmunological Laboratory in the Multiple Sclerosis Center of Sheba Medical Center in Ramat Gan, Israel, to say that it is already clinically viable even as the tool is still evolving.

“We are looking at larger sample sizes to improve the accuracy and generalizability of our model, but we can use it now to inform treatment decisions,” Dr. Gurevich said.

In patients with PPMS who have a highly variable course, the model predicts disability progression with an accuracy of approximately 70%, according to the data he presented. He said he believes this is already at a level that it is meaningful for a risk-to-benefit calculation when considering treatment.
 

Machine learning analyzes blood samples

The study pursues the concept that the genetic information governing highly complex pathophysiological processes is contained in RNA sequencing. While multimodal omics generate data that are too complex for human pattern recognition, there is a growing body of evidence, including that provided by this study, to suggest that machine learning can employ these same RNA profiles and predict disease activity.

In this study, blood samples were collected from patients who participated in the phase 3 clinical ORATORIO trial that led to approval of ocrelizumab for PPMS. Analyses were conducted only on blood samples from those randomized to placebo, who, like those in the active treatment arm, were evaluated at baseline and at 12-week intervals for more than 2 years.

After development of a prediction model and creation of a training dataset, machine learning was applied to the deep sequencing of the blood transcriptome data for predicting two endpoints. One was disease progression at 120 weeks defined as a 1 point or more increase in the Expanded Disability Status Scale (EDSS) among patients with confirmed disability progression for at least 12 weeks (12W-CDP).

The other was change at 120 weeks in brain morphology defined as a 1% or more reduction in brain volume (120W PBVC).

The peripheral blood samples were subjected to RNA sequencing analysis (RNA-Seq) using commercially available analysis techniques. The prediction model for the disability endpoint was based on data generated from the blood transcriptome of 135 patients of which 53 (39%) met the endpoint at 120 weeks. The prediction model for the change in brain morphology was based on the blood transcriptome from 94 patients of which 63 (67%) met the endpoint.

On the basis of 10 genes that most significantly differentiated those who met the disability endpoint from those who did not, machine recognition of patterns after training was able to predict correctly the outcome in 70.9%. The sensitivity was 55.6%, and the specificity was 79.0%. The positive and negative predictive values were 59.0% and 76.8%, respectively.

On the basis of the 12 genes the most significantly differentiated those that reached the 120W PBVC endpoint from those who did not, machine learning resulted in a correct prediction of outcomes in 75.1%. The sensitivity was 78.1%, and the specificity was 66.7%. The positive and negative predictive values were 83.3% and 58.8%, respectively

Typical of a PPMS trial population, the mean age of the patients was about 44 years. The mean disease duration was about 6 years. The majority of patients had an EDSS score below 5.5 at baseline. The baseline T2 lesion number was approximately 50.

If further validated by others and in larger studies, this type of information could play a valuable role in PPMS management, according to Dr. Gurevich. Now that there is an approved therapy for PPMS, it can help clinicians and patients determine whether to initiate treatment early to address the high risk of progression or delay treatment that might not be needed.
 

 

 

A useful tool

In the field of MS, most of the studies performed with machine learning have focused on the analysis of radiological images. However, others are now looking at the blood transcriptome as a potential path to better classifying a highly complex disease with substantial heterogeneity in presentation, progression, and outcome.

For example, machine learning of the blood transcriptome has also shown high accuracy in the diagnosis and classification of MS in patients with clinically isolated syndrome (CIS). One study, published in Cell Reports Medicine, was led by Cinthia Farina, PhD, Institute of Experimental Neurology, IRCCS San Raffaele Scientific Institute, Milan.

Although she did not hear the presentation by Dr. Gurevich, Dr. Farina is enthusiastic about the potential for machine learning to help manage MS through the analysis of the blood transcriptome. “I do believe that transcriptomics in peripheral immune cells may become a useful tool for MS diagnosis and prognosis,” she said.

In her own study, in which machine learning algorithms were developed and trained on the basis of peripheral blood from patients with CIS, the tool proved accurate with a strong potential for being incorporated into routine clinical management.

“Machine learning applied to the blood transcriptomes was extremely efficient with a 95.6% accuracy in discriminating PPMS from RRMS [relapsing-remitting] MS,” she reported.

Dr. Gurevich has no potential financial conflicts of interest to report. He reported funding for the study was provided by Roche. Dr. Farina reports financial relationships with Merck-Serono, Novartis, and Teva.

BOSTON – In patients with primary progressive multiple sclerosis (PPMS), machine learning can predict progression with reasonable accuracy on the basis of the blood transcriptome, according to a proof-of-concept study presented at the 2023 annual meeting of the American Academy of Neurology.

The accuracy was sufficient for the lead author of the study, Michael Gurevich, PhD, head of the Neuroimmunological Laboratory in the Multiple Sclerosis Center of Sheba Medical Center in Ramat Gan, Israel, to say that it is already clinically viable even as the tool is still evolving.

“We are looking at larger sample sizes to improve the accuracy and generalizability of our model, but we can use it now to inform treatment decisions,” Dr. Gurevich said.

In patients with PPMS who have a highly variable course, the model predicts disability progression with an accuracy of approximately 70%, according to the data he presented. He said he believes this is already at a level that it is meaningful for a risk-to-benefit calculation when considering treatment.
 

Machine learning analyzes blood samples

The study pursues the concept that the genetic information governing highly complex pathophysiological processes is contained in RNA sequencing. While multimodal omics generate data that are too complex for human pattern recognition, there is a growing body of evidence, including that provided by this study, to suggest that machine learning can employ these same RNA profiles and predict disease activity.

In this study, blood samples were collected from patients who participated in the phase 3 clinical ORATORIO trial that led to approval of ocrelizumab for PPMS. Analyses were conducted only on blood samples from those randomized to placebo, who, like those in the active treatment arm, were evaluated at baseline and at 12-week intervals for more than 2 years.

After development of a prediction model and creation of a training dataset, machine learning was applied to the deep sequencing of the blood transcriptome data for predicting two endpoints. One was disease progression at 120 weeks defined as a 1 point or more increase in the Expanded Disability Status Scale (EDSS) among patients with confirmed disability progression for at least 12 weeks (12W-CDP).

The other was change at 120 weeks in brain morphology defined as a 1% or more reduction in brain volume (120W PBVC).

The peripheral blood samples were subjected to RNA sequencing analysis (RNA-Seq) using commercially available analysis techniques. The prediction model for the disability endpoint was based on data generated from the blood transcriptome of 135 patients of which 53 (39%) met the endpoint at 120 weeks. The prediction model for the change in brain morphology was based on the blood transcriptome from 94 patients of which 63 (67%) met the endpoint.

On the basis of 10 genes that most significantly differentiated those who met the disability endpoint from those who did not, machine recognition of patterns after training was able to predict correctly the outcome in 70.9%. The sensitivity was 55.6%, and the specificity was 79.0%. The positive and negative predictive values were 59.0% and 76.8%, respectively.

On the basis of the 12 genes the most significantly differentiated those that reached the 120W PBVC endpoint from those who did not, machine learning resulted in a correct prediction of outcomes in 75.1%. The sensitivity was 78.1%, and the specificity was 66.7%. The positive and negative predictive values were 83.3% and 58.8%, respectively

Typical of a PPMS trial population, the mean age of the patients was about 44 years. The mean disease duration was about 6 years. The majority of patients had an EDSS score below 5.5 at baseline. The baseline T2 lesion number was approximately 50.

If further validated by others and in larger studies, this type of information could play a valuable role in PPMS management, according to Dr. Gurevich. Now that there is an approved therapy for PPMS, it can help clinicians and patients determine whether to initiate treatment early to address the high risk of progression or delay treatment that might not be needed.
 

 

 

A useful tool

In the field of MS, most of the studies performed with machine learning have focused on the analysis of radiological images. However, others are now looking at the blood transcriptome as a potential path to better classifying a highly complex disease with substantial heterogeneity in presentation, progression, and outcome.

For example, machine learning of the blood transcriptome has also shown high accuracy in the diagnosis and classification of MS in patients with clinically isolated syndrome (CIS). One study, published in Cell Reports Medicine, was led by Cinthia Farina, PhD, Institute of Experimental Neurology, IRCCS San Raffaele Scientific Institute, Milan.

Although she did not hear the presentation by Dr. Gurevich, Dr. Farina is enthusiastic about the potential for machine learning to help manage MS through the analysis of the blood transcriptome. “I do believe that transcriptomics in peripheral immune cells may become a useful tool for MS diagnosis and prognosis,” she said.

In her own study, in which machine learning algorithms were developed and trained on the basis of peripheral blood from patients with CIS, the tool proved accurate with a strong potential for being incorporated into routine clinical management.

“Machine learning applied to the blood transcriptomes was extremely efficient with a 95.6% accuracy in discriminating PPMS from RRMS [relapsing-remitting] MS,” she reported.

Dr. Gurevich reported no potential financial conflicts of interest. Funding for the study was provided by Roche. Dr. Farina reported financial relationships with Merck-Serono, Novartis, and Teva.


FROM AAN 2023


Unawareness of memory slips could indicate risk for Alzheimer’s

Article Type
Changed
Fri, 04/28/2023 - 08:26

Everyone’s memory fades to some extent as we age, but not everyone will develop Alzheimer’s disease. Screening the most likely people to develop Alzheimer’s remains an ongoing challenge, as some people present only unambiguous symptoms once their disease is advanced.

A new study in JAMA Network Open suggests that one early clue is found in people’s own self-perception of their memory skills. People who are more aware of their own declining memory capacity are less likely to develop Alzheimer’s, the study suggests.

“Some people are very aware of changes in their memory, but many people are unaware,” said study author Patrizia Vannini, PhD, a neurologist at Brigham and Women’s Hospital in Boston. There are gradations of unawareness of memory loss, Dr. Vannini said, from complete unawareness that anything is wrong, to a partial unawareness that memory is declining.

The study compared the records of 436 participants in the Alzheimer’s Disease Neuroimaging Initiative, an Alzheimer’s research institute housed at the University of Southern California. More than 90% of the participants were White, and generally had a college education. Their average age was 75 years, and 53% of participants were women.

Dr. Vannini and colleagues tracked people whose cognitive function was normal at the beginning of the study, based on the Clinical Dementia Rating. Throughout the course of the study, which included data from 2010 to 2021, 91 of the 436 participants experienced a sustained decline in their Clinical Dementia Rating scores, indicating a risk for eventual Alzheimer’s, whereas the other participants held steady.

The people who declined in cognitive function were less aware of slips in their memory, as assessed by discrepancies between people’s self-reports of their own memory skills and the perceptions of someone in their lives. For this part of the study, Dr. Vannini and colleagues used the Everyday Cognition Questionnaire, which evaluates memory tasks such as shopping without a grocery list or recalling conversations from a few days ago. Both the participant and the study partner rated their performance on such tasks compared to 10 years earlier. Those who were less aware of their memory slips were more likely to experience declines in the Clinical Dementia Rating, compared with people with a heightened concern about memory loss (as measured by being more concerned about memory decline than their study partners).
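Unawareness here is operationalized as a discrepancy score between self-report and informant report. A minimal sketch of that idea (the scoring convention below is an illustrative simplification, not ADNI's exact method; it assumes ECog-style ratings where a higher score means more perceived decline):

```python
def awareness_discrepancy(self_ratings, partner_ratings):
    """Mean (self - partner) rating difference across questionnaire items.

    Negative values: the participant reports less decline than the study
    partner observes (relative unawareness). Positive values: heightened
    concern, i.e., the participant reports more decline than the partner.
    """
    diffs = [s - p for s, p in zip(self_ratings, partner_ratings)]
    return sum(diffs) / len(diffs)

# Participant reports little decline; the study partner reports more
print(awareness_discrepancy([1, 2, 1, 1], [3, 3, 2, 2]))  # -1.25 -> unawareness
```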

“Partial or complete unawareness is often related to delayed diagnosis of Alzheimer’s, because the patient is unaware they are having problems,” Dr. Vannini said, adding that this is associated with a poorer prognosis as well.
 

Implications for clinicians

Soo Borson, MD, professor of clinical family medicine at the University of Southern California and coleader of a CDC-funded early dementia detection center at New York University, pointed out that sometimes people are genuinely unaware that their memory is declining, while at other times they know it all too well but say everything is fine when a doctor asks about their current memory status. That may be because people fear the label of “Alzheimer’s,” Dr. Borson suggested, or simply because they don’t want to start a protracted diagnostic pathway that could involve lots of tests and time.

Dr. Borson, who was not involved in the study, noted that the population was predominantly White and well-educated, and by definition included people who were concerned enough about potential memory loss to become part of an Alzheimer’s research network. This limits the generalizability of this study’s results to other populations, Dr. Borson said.

Despite that limitation, in Dr. Borson’s view the study points to the continued importance of clinicians (ideally a primary care doctor who knows the patient well) engaging with patients about their brain health once they reach midlife. A doctor could ask if patients have noticed a decline in their thinking or memory over the last year, for example, or a more open-ended question about any memory concerns.

Although some patients may choose to withhold concerns about their memory, Dr. Borson acknowledged, the overall thrust of these questions is to provide a safe space for patients to air their concerns if they so choose. In some cases it would be appropriate to do a simple memory test on the spot, and then proceed accordingly – either for further tests if something of concern emerges, or to reassure the patient if the test doesn’t yield anything of note. In the latter case some patients will still want further tests for additional reassurance, and Dr. Borson thinks doctors should facilitate that request even if in their own judgment nothing is wrong.

“This is not like testing for impaired kidney function by doing a serum creatinine test,” Dr. Borson said. While the orientation of the health care system is toward quick and easy answers for everything, detecting possible dementia eludes such an approach.

Dr. Vannini reports funding from the National Institutes of Health National Institute on Aging. Dr. Borson reported no disclosures.



FROM JAMA NETWORK OPEN


Noninvasive testing in midlife flags late-onset epilepsy risk

Article Type
Changed
Fri, 04/28/2023 - 00:35

BOSTON – Noninvasive tests performed in midlife may help identify people who are at risk of late-onset epilepsy, a new study suggests. New data from the Framingham Heart Study show those who scored better on a neurocognitive test that measures executive function were 75% less likely to develop late-onset epilepsy.

An analysis of MRI revealed that those with higher cortical volumes also had a lower risk of epilepsy later in life, while those with higher white matter hyperintensities had an increased risk.

The study could help identify at-risk individuals years before symptoms of epilepsy appear.

“We present possible markers that could potentially identify patients at risk for developing late-onset epilepsy, even in the preclinical phase and before the clinical manifestation of conditions like stroke and dementia that are known now to be linked with the condition,” said lead investigator Maria Stefanidou, MD, assistant professor of neurology at Boston University.

The findings were presented at the 2023 annual meeting of the American Academy of Neurology.
 

Protection against late-onset epilepsy?

Hypertension, stroke, and dementia are all known risk factors for late-onset epilepsy. But in about 30% of cases, the cause of epilepsy in older individuals is unknown.

For this study, investigators analyzed data from the offspring cohort of the Framingham Heart Study. Participants were at least 45 years old; underwent neuropsychological evaluation and brain MRI; and had no prior history of stroke, dementia, or epilepsy. Cognitive measures included Visual Reproductions Delayed Recall, Logical Memory Delayed Recall, Similarities, Trail Making B-A (TrB-TrA), and the Hooper Visual Organization Test.

Participants also underwent an MRI to measure total cerebral brain volume, cortical gray matter volume, white matter hyperintensities, and hippocampal volume.

After a mean follow-up of 13.5 years, late-onset epilepsy was diagnosed in 31 of the participants who underwent neuropsychological testing (n = 2,349) and in 27 of those who underwent MRI (n = 2,056).

Better performance on the TrB-TrA test (a measure of executive function, processing speed, and occult vascular injury) was associated with a reduced risk of late-onset epilepsy (adjusted hazard ratio, 0.25; P = .011).

The findings held even after adjusting for age, sex, educational level, and known risk factors for late-onset epilepsy, such as hypertension (aHR, 0.30; P = .0401).

A higher burden of white matter hyperintensities, a measure of occult vascular injury, was associated with increased epilepsy risk (aHR, 1.5; P = .042) when adjusted only for age, sex, and education, but the association was no longer significant after adjusting for hypertension and other risk factors (aHR, 1.47; P = .065).

The analysis also revealed that participants with a higher cortical gray matter volume had a lower risk for late-onset epilepsy (aHR, 0.73; P = .001).
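The headline figure of "75% less likely" comes straight from the adjusted hazard ratio: a hazard ratio below 1 implies a proportional reduction in the event rate. A simple arithmetic sketch, using the ratios reported above:

```python
def pct_hazard_reduction(hr):
    """Percent reduction in hazard implied by a hazard ratio below 1."""
    return round((1 - hr) * 100)

print(pct_hazard_reduction(0.25))  # better TrB-TrA performance: 75 (% lower hazard)
print(pct_hazard_reduction(0.73))  # higher cortical gray matter volume: 27
```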

“There is increasing literature supporting that late-onset epilepsy may be secondary to accumulative occult cerebrovascular and neurodegenerative processes that occur during aging,” Dr. Stefanidou said. “Our findings likely reflect that a lesser degree of occult vascular brain injury in midlife may be protective against late-onset epilepsy.”

However, the epidemiological study points to association, not causation, Dr. Stefanidou cautioned.

“Further studies will be needed to study our observations in the clinical setting,” she said.
 

‘Intriguing’ findings

Commenting on the findings, Joseph Sirven, MD, a neurologist at the Mayo Clinic in Jacksonville, Fla., said the findings are “intriguing,” but also raise some questions. “Late-onset epilepsy remains an issue for many and it’s common,” said Dr. Sirven, who has patients with late-onset epilepsy.

Dr. Sirven was particularly interested in the findings on white matter hyperintensities. “Hippocampal volumes have been used but not so much cortical volumes,” he said. “I would like to know more about how white matter changes suggest pathology that would explain epilepsy.”

Study funding was not disclosed. Dr. Stefanidou and Dr. Sirven report no relevant financial relationships.

A version of this article first appeared on Medscape.com.



FROM AAN 2023


Therapy to reverse muscular dystrophies shows promise

Article Type
Changed
Fri, 04/28/2023 - 00:36

Becker muscular dystrophy (BMD) and Duchenne muscular dystrophy (DMD) progress largely through irreversible contraction-induced injury of skeletal muscle, which makes the very positive interim results of an early-phase trial of a drug that prevents these injuries worth attention.

The phase 1b data in BMD, presented at the 2023 annual meeting of the American Academy of Neurology, were sufficiently promising that controlled phase 2 trials in both BMD and DMD are already enrolling, reported Joanne Donovan, MD, PhD, an adjunct professor at Boston University and chief medical officer of Edgewise Therapeutics, the company developing the drug.
 

Phase 1 study

Early-phase studies are largely focused on safety, but the 6-month interim data of the 12-month study showed rapid reductions in multiple biomarkers of muscle injury, reductions in inflammatory markers, proteomic changes consistent with sustained effects, and a trend toward functional improvement in muscular dystrophies.

Moreover, the evidence of a clinical effect was achieved in adult patients with a North Star Ambulatory Assessment (NSAA) score of 15, signifying advanced disease. Only 12 patients were enrolled and there were no controls, but objective evidence of a favorable effect was generated by highly significant reductions in creatine kinase (CK) and fast skeletal muscle troponin (TNNI2), both biomarkers commonly used to track muscular dystrophy progression.

In patients with BMD or DMD, a lack of dystrophin is a key pathogenic feature, according to Dr. Donovan. She explained that dystrophin in muscles connects contractile proteins to membranes and surrounding matrix. In the presence of dystrophin, muscle fibers support each other, but when this protein is absent, contraction causes injury.

The drug in development, currently identified as EDG-5506, is a selective fast myosin inhibitor. This agent was shown to prevent the muscle injury caused by lack of dystrophin in animal models of muscular dystrophy and is now showing the same effect in humans. Several sets of data indicate that preservation of muscle is critical to preventing BMD and DMD progression, according to Dr. Donovan.

For one, BMD and DMD patients with relatively preserved function, defined as an NSAA score above 32, have been shown to have minimal muscle damage. As NSAA scores fall below 32 points, muscle mass diminishes and fat accumulates. In natural history studies of BMD, there is a 1.2-point decline in NSAA score over 5 years, and this decline tracks with muscle loss rather than with other variables, such as patient age.

“Progression depends on the degree of muscle loss,” Dr. Donovan stated, providing the rationale for moving forward with EDG-5506.
 

Proof of concept

In experimental studies, modulation of fast myosin provided complete protection against muscle injury while preserving contractile function, and this translated into protection against loss of function. Phase 1 studies in BMD patients and healthy controls had already provided evidence that EDG-5506 is well tolerated and safe, but the new phase 1b study provides proof of concept for its ability to inhibit muscle injury in BMD patients.

In this study, called ARCH, 12 adults 18 years of age or older with a dystrophin mutation and a BMD phenotype who could complete a 100-meter timed test were enrolled. The median age at entry was 32 years. Several patients had participated in a previous phase 1 safety study. The daily dose, started at 10 mg, was increased to 15 mg at 2 months and again to 20 mg at 6 months, but the data presented by Dr. Donovan were restricted to the first 6 months.

At the interim 6-month analysis, creatine kinase was reduced by 40% and TNNI2 by 84% (both P < 0.001). The significant reductions in these and other biomarkers, such as myoglobin, were mostly achieved within the first month, although further reductions were observed for some biomarkers subsequently.

The NSAA score at 6 months improved on average by about 1 point on treatment. Natural history studies of BMD predict a 1-point reduction in NSAA score over this period of time. The modest improvements from baseline in pain scores at 1 month were sustained at 6 months.

On the basis of a proteomic analysis, 125 proteins mostly associated with metabolic pathways consistent with muscle injury were found to be altered in BMD patients relative to healthy controls. The majority of these proteins, whether assessed collectively or individually, normalized after 1 to 2 months of treatment with EDG-5506 and have remained stable during follow-up to date, according to Dr. Donovan.

As in previous studies, the drug was well tolerated. The three most common treatment-emergent events were dizziness, somnolence, and headache. Each was reported by about 25% of patients, but no patient discontinued therapy as a result of adverse events.
 

 

 

Findings deemed ‘a big deal’

These data, despite the small number of patients in the study and the limited follow-up, “are a big deal,” according to Nicholas E. Johnson, MD, division chief, neuromuscular disorders, Virginia Commonwealth University, Richmond. He pointed out that there are no effective treatments currently for BMD, and the mechanism of action is plausible.

“I am excited about the potential of this treatment, although we clearly need longer follow-up and more patients evaluated on this treatment,” Dr. Johnson said. He said that clinicians with BMD patients should be aware of the phase 2 trial that is now recruiting adult subjects.

“Becker muscular dystrophy is highly disabling. As disease advances, most patients have very limited function,” said Dr. Johnson, emphasizing the urgent unmet need for an effective therapy.

Dr. Donovan is a full-time employee of Edgewise Therapeutics, which funded this study. Dr. Johnson has financial relationships with Acceleron, Arthex, AveXis, Avidity, Biogen, Dyne Therapeutics, Entrada, Juvena, ML Bio, Sarepta Therapeutics, Triplet Therapeutics, and Vertex Pharma.
 

Article Source

FROM AAN 2023


Drive, chip, and putt your way to osteoarthritis relief

Article Type
Changed
Tue, 05/16/2023 - 02:28

 

Taking a swing against arthritis

Osteoarthritis is a tough disease to manage. Exercise helps ease the stiffness and pain of the joints, but at the same time, the disease makes it difficult to do that beneficial exercise. Even a relatively simple activity like jogging can hurt more than it helps. If only there were a low-impact exercise that was incredibly popular among the generally older population who are likely to have arthritis.

We love a good golf study here at LOTME, and a group of Australian and U.K. researchers have delivered one. Osteoarthritis affects 2 million people in the land down under, making it the most common source of disability there. In that population, only 64% reported their physical health to be good, very good, or excellent. Among the 459 golfers with OA whom the study authors surveyed, however, the percentage reporting good health rose to more than 90%.

jacoblund/Getty Images

A similar story emerged when they looked at mental health. Nearly a quarter of nongolfers with OA reported high or very high levels of psychological distress, compared with just 8% of golfers. This pattern of improved physical and mental health remained when the researchers looked at the general, non-OA population.

This isn’t the first time golf’s been connected with improved health, and previous studies have shown golf to reduce the risks of cardiovascular disease, diabetes, and obesity, among other things. Just walking one 18-hole round significantly exceeds the CDC’s recommended 150 minutes of physical activity per week. Go out multiple times a week – leaving the cart and beer at home, American golfers – and you’ll be fit for a lifetime.

The golfers on our staff, however, are still waiting for those mental health benefits to kick in. Because when we’re adding up our scorecard after that string of four double bogeys to end the round, we’re most definitely thinking: “Yes, this sport is reducing my psychological distress. I am having fun right now.”
 

Battle of the sexes’ intestines

There are, we’re sure you’ve noticed, some differences between males and females. Females, for one thing, have longer small intestines than males. Everybody knows that, right? You didn’t know? Really? … Really?

Afif Ramdhasuma/Unsplash

Well, then, we’re guessing you haven’t read “Hidden diversity: Comparative functional morphology of humans and other species” by Erin A. McKenney, PhD, of North Carolina State University, Raleigh, and associates, which just appeared in PeerJ. We couldn’t put it down, even in the shower – a real page-turner/scroller. (It’s a great way to clean a phone, for those who also like to scroll, text, or talk on the toilet.)

The researchers got out their rulers, calipers, and string and took many measurements of the digestive systems of 45 human cadavers (21 female and 24 male), which were compared with data from 10 rats, 10 pigs, and 10 bullfrogs, which had been collected (the measurements, not the animals) by undergraduate students enrolled in a comparative anatomy laboratory course at the university.

There was little intestinal-length variation among the four-legged subjects, but when it comes to humans, females have “consistently and significantly longer small intestines than males,” the investigators noted.

The women’s small intestines, almost 14 feet long on average, were about a foot longer than the men’s, which suggests that women are better able to extract nutrients from food and “supports the canalization hypothesis, which posits that women are better able to survive during periods of stress,” coauthor Amanda Hale said in a written statement from the school. The way to a man’s heart may be through his stomach, but the way to a woman’s heart is through her duodenum, it seems.

Fascinating stuff, to be sure, but the thing that really caught our eye in the PeerJ article was the authors’ suggestion “that organs behave independently of one another, both within and across species.” Organs behaving independently? A somewhat ominous concept, no doubt, but it does explain a lot of the sounds we hear coming from our guts, which can get pretty frightening, especially on chili night.
 

 

 

Dog walking is dangerous business

Yes, you did read that right. A lot of strange things can send you to the emergency department. Go ahead and add dog walking onto that list.

Investigators from Johns Hopkins University estimate that over 422,000 adults presented to U.S. emergency departments with leash-dependent dog walking-related injuries between 2001 and 2020.

freestocks/Unsplash

With almost 53% of U.S. households owning at least one dog in 2021-2022 in the wake of the COVID pet boom, this kind of occurrence is becoming more common than you might think. The annual number of dog-walking injuries more than quadrupled from 7,300 to 32,000 over the course of the study, and the researchers link that spike to the promotion of dog walking for fitness, along with the rise in ownership itself.

The most common injuries listed in the National Electronic Injury Surveillance System database were finger fracture, traumatic brain injury, and shoulder sprain or strain. These mostly involved falls from being pulled, tripped, or tangled up in the leash while walking. For those aged 65 years and older, traumatic brain injury and hip fracture were the most common.

Women were 50% more likely to sustain a fracture than were men, and dog owners aged 65 and older were three times as likely to fall, twice as likely to get a fracture, and 60% more likely to have brain injury than were younger people. Now, that’s not to say younger people don’t also get hurt. After all, dogs aren’t ageists. The researchers have that data but it’s coming out later.

Meanwhile, the pitfalls involved with just trying to get our daily steps in while letting Muffin do her business have us on the lookout for random squirrels.

Publications
Topics
Sections

 

Taking a swing against arthritis

Osteoarthritis is a tough disease to manage. Exercise helps ease the stiffness and pain of the joints, but at the same time, the disease makes it difficult to do that beneficial exercise. Even a relatively simple activity like jogging can hurt more than it helps. If only there were a low-impact exercise that was incredibly popular among the generally older population who are likely to have arthritis.

We love a good golf study here at LOTME, and a group of Australian and U.K. researchers have provided. Osteoarthritis affects 2 million people in the land down under, making it the most common source of disability there. In that population, only 64% reported their physical health to be good, very good, or excellent. Among the 459 golfers with OA that the study authors surveyed, however, the percentage reporting good health rose to more than 90%.

jacoblund/Getty Images

A similar story emerged when they looked at mental health. Nearly a quarter of nongolfers with OA reported high or very high levels of psychological distress, compared with just 8% of golfers. This pattern of improved physical and mental health remained when the researchers looked at the general, non-OA population.

This isn’t the first time golf’s been connected with improved health, and previous studies have shown golf to reduce the risks of cardiovascular disease, diabetes, and obesity, among other things. Just walking one 18-hole round significantly exceeds the CDC’s recommended 150 minutes of physical activity per week. Go out multiple times a week – leaving the cart and beer at home, American golfers – and you’ll be fit for a lifetime.

The golfers on our staff, however, are still waiting for those mental health benefits to kick in. Because when we’re adding up our scorecard after that string of four double bogeys to end the round, we’re most definitely thinking: “Yes, this sport is reducing my psychological distress. I am having fun right now.”
 

Battle of the sexes’ intestines

There are, we’re sure you’ve noticed, some differences between males and females. Females, for one thing, have longer small intestines than males. Everybody knows that, right? You didn’t know? Really? … Really?

Afif Ramdhasuma/Unsplash

Well, then, we’re guessing you haven’t read “Hidden diversity: Comparative functional morphology of humans and other species” by Erin A. McKenney, PhD, of North Carolina State University, Raleigh, and associates, which just appeared in PeerJ. We couldn’t put it down, even in the shower – a real page-turner/scroller. (It’s a great way to clean a phone, for those who also like to scroll, text, or talk on the toilet.)

The researchers got out their rulers, calipers, and string and took many measurements of the digestive systems of 45 human cadavers (21 female and 24 male), which were compared with data from 10 rats, 10 pigs, and 10 bullfrogs, which had been collected (the measurements, not the animals) by undergraduate students enrolled in a comparative anatomy laboratory course at the university.

There was little intestinal-length variation among the four-legged subjects, but when it comes to humans, females have “consistently and significantly longer small intestines than males,” the investigators noted.

The women’s small intestines, almost 14 feet long on average, were about a foot longer than the men’s, which suggests that women are better able to extract nutrients from food and “supports the canalization hypothesis, which posits that women are better able to survive during periods of stress,” coauthor Amanda Hale said in a written statement from the school. The way to a man’s heart may be through his stomach, but the way to a woman’s heart is through her duodenum, it seems.

Fascinating stuff, to be sure, but the thing that really caught our eye in the PeerJ article was the authors’ suggestion “that organs behave independently of one another, both within and across species.” Organs behaving independently? A somewhat ominous concept, no doubt, but it does explain a lot of the sounds we hear coming from our guts, which can get pretty frightening, especially on chili night.
 

 

 

Dog walking is dangerous business

Yes, you did read that right. A lot of strange things can send you to the emergency department. Go ahead and add dog walking onto that list.

Investigators from Johns Hopkins University estimate that over 422,000 adults presented to U.S. emergency departments with leash-dependent dog walking-related injuries between 2001 and 2020.

freestocks/Unsplash

With almost 53% of U.S. households owning at least one dog in 2021-2022 in the wake of the COVID pet boom, this kind of occurrence is becoming more common than you think. The annual number of dog-walking injuries more than quadrupled from 7,300 to 32,000 over the course of the study, and the researchers link that spike to the promotion of dog walking for fitness, along with the boost of ownership itself.

The most common injuries listed in the National Electronic Injury Surveillance System database were finger fracture, traumatic brain injury, and shoulder sprain or strain. These mostly involved falls from being pulled, tripped, or tangled up in the leash while walking. For those aged 65 years and older, traumatic brain injury and hip fracture were the most common.

Women were 50% more likely to sustain a fracture than were men, and dog owners aged 65 and older were three times as likely to fall, twice as likely to get a fracture, and 60% more likely to have brain injury than were younger people. Now, that’s not to say younger people don’t also get hurt. After all, dogs aren’t ageists. The researchers have that data but it’s coming out later.

Meanwhile, the pitfalls involved with just trying to get our daily steps in while letting Muffin do her business have us on the lookout for random squirrels.

 

Taking a swing against arthritis

Osteoarthritis is a tough disease to manage. Exercise helps ease the stiffness and pain of the joints, but at the same time, the disease makes it difficult to do that beneficial exercise. Even a relatively simple activity like jogging can hurt more than it helps. If only there were a low-impact exercise that was incredibly popular among the generally older population who are likely to have arthritis.

We love a good golf study here at LOTME, and a group of Australian and U.K. researchers have provided. Osteoarthritis affects 2 million people in the land down under, making it the most common source of disability there. In that population, only 64% reported their physical health to be good, very good, or excellent. Among the 459 golfers with OA that the study authors surveyed, however, the percentage reporting good health rose to more than 90%.

jacoblund/Getty Images

A similar story emerged when they looked at mental health. Nearly a quarter of nongolfers with OA reported high or very high levels of psychological distress, compared with just 8% of golfers. This pattern of improved physical and mental health remained when the researchers looked at the general, non-OA population.

This isn’t the first time golf’s been connected with improved health, and previous studies have shown golf to reduce the risks of cardiovascular disease, diabetes, and obesity, among other things. Just walking one 18-hole round significantly exceeds the CDC’s recommended 150 minutes of physical activity per week. Go out multiple times a week – leaving the cart and beer at home, American golfers – and you’ll be fit for a lifetime.

The golfers on our staff, however, are still waiting for those mental health benefits to kick in. Because when we’re adding up our scorecard after that string of four double bogeys to end the round, we’re most definitely thinking: “Yes, this sport is reducing my psychological distress. I am having fun right now.”
 

Battle of the sexes’ intestines

There are, we’re sure you’ve noticed, some differences between males and females. Females, for one thing, have longer small intestines than males. Everybody knows that, right? You didn’t know? Really? … Really?

Afif Ramdhasuma/Unsplash

Well, then, we’re guessing you haven’t read “Hidden diversity: Comparative functional morphology of humans and other species” by Erin A. McKenney, PhD, of North Carolina State University, Raleigh, and associates, which just appeared in PeerJ. We couldn’t put it down, even in the shower – a real page-turner/scroller. (It’s a great way to clean a phone, for those who also like to scroll, text, or talk on the toilet.)

The researchers got out their rulers, calipers, and string and took many measurements of the digestive systems of 45 human cadavers (21 female and 24 male), which were compared with data from 10 rats, 10 pigs, and 10 bullfrogs, which had been collected (the measurements, not the animals) by undergraduate students enrolled in a comparative anatomy laboratory course at the university.

There was little intestinal-length variation among the four-legged subjects, but when it comes to humans, females have “consistently and significantly longer small intestines than males,” the investigators noted.

The women’s small intestines, almost 14 feet long on average, were about a foot longer than the men’s, which suggests that women are better able to extract nutrients from food and “supports the canalization hypothesis, which posits that women are better able to survive during periods of stress,” coauthor Amanda Hale said in a written statement from the school. The way to a man’s heart may be through his stomach, but the way to a woman’s heart is through her duodenum, it seems.

Fascinating stuff, to be sure, but the thing that really caught our eye in the PeerJ article was the authors’ suggestion “that organs behave independently of one another, both within and across species.” Organs behaving independently? A somewhat ominous concept, no doubt, but it does explain a lot of the sounds we hear coming from our guts, which can get pretty frightening, especially on chili night.
 

 

 

Dog walking is dangerous business

Yes, you did read that right. A lot of strange things can send you to the emergency department. Go ahead and add dog walking onto that list.

Investigators from Johns Hopkins University estimate that more than 422,000 adults presented to U.S. emergency departments with injuries related to leashed dog walking between 2001 and 2020.


With almost 53% of U.S. households owning at least one dog in 2021-2022 in the wake of the COVID pet boom, this kind of occurrence is becoming more common than you think. The annual number of dog-walking injuries more than quadrupled from 7,300 to 32,000 over the course of the study, and the researchers link that spike to the promotion of dog walking for fitness, along with the boost of ownership itself.

The most common injuries listed in the National Electronic Injury Surveillance System database were finger fracture, traumatic brain injury, and shoulder sprain or strain. These mostly involved falls from being pulled, tripped, or tangled up in the leash while walking. For those aged 65 years and older, traumatic brain injury and hip fracture were the most common.

Women were 50% more likely to sustain a fracture than were men, and dog owners aged 65 and older were three times as likely to fall, twice as likely to get a fracture, and 60% more likely to have brain injury than were younger people. Now, that’s not to say younger people don’t also get hurt. After all, dogs aren’t ageists. The researchers have that data but it’s coming out later.

Meanwhile, the pitfalls involved with just trying to get our daily steps in while letting Muffin do her business have us on the lookout for random squirrels.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

FDA gives fast-track approval to new ALS drug

Article Type
Changed
Thu, 04/27/2023 - 12:03

The Food and Drug Administration has approved the first treatment that takes a genetics-based approach to slowing or stopping the progression of a rare form of amyotrophic lateral sclerosis (ALS), the debilitating and deadly disease for which there is no cure.
 

Most people with ALS die within 3-5 years of when symptoms appear, usually of respiratory failure.

The newly approved drug, called Qalsody, is made by the biotechnology company Biogen. The FDA fast-tracked the approval based on early trial results. The agency said in a news release that its decision was based on the drug's demonstrated ability to reduce a protein in the blood that is a marker of degeneration of brain and nerve cells.

While the drug was shown to affect the biological process linked to degeneration, there was no significant change in people's symptoms during the first 28 weeks of treatment, Biogen said in a news release. But the company noted that some patients did see improved functioning after starting treatment.

“I have observed the positive impact Qalsody has on slowing the progression of ALS in people with SOD1 mutations,” Timothy M. Miller, MD, PhD, researcher and codirector of the ALS Center at Washington University in St. Louis, said in a statement released by Biogen. “The FDA’s approval of Qalsody gives me hope that people living with this rare form of ALS could experience a reduction in decline in strength, clinical function, and respiratory function.”

Qalsody is given to people via a spinal injection, with an initial course of three injections every 2 weeks. People then get the injection once every 28 days.
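Read as a protocol, that dosing cadence can be sketched in a few lines. This is a hypothetical helper for illustration only, not dosing guidance: the function name, the start date, and the number of maintenance doses shown are assumptions, with only the 14-day and 28-day intervals taken from the description above.

```python
from datetime import date, timedelta

def dosing_schedule(first_dose: date, maintenance_doses: int = 3) -> list[date]:
    """Sketch of the cadence described above: three loading doses
    14 days apart, then subsequent doses every 28 days."""
    # Three loading doses: day 0, day 14, day 28.
    doses = [first_dose + timedelta(days=14 * i) for i in range(3)]
    # Maintenance doses, each 28 days after the previous one.
    for _ in range(maintenance_doses):
        doses.append(doses[-1] + timedelta(days=28))
    return doses

schedule = dosing_schedule(date(2023, 5, 1))
```

With the default arguments this yields six dates: the loading doses land 14 days apart and every later dose 28 days after the one before it.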

The new treatment is approved only for people with a rare form of ALS known as SOD1-ALS, which is caused by mutations in the SOD1 gene. While ALS affects up to 32,000 people in the United States, just 2% of people with ALS carry a SOD1 mutation. The FDA estimates that about 500 people in the United States could use Qalsody.

In trials, 147 people received either Qalsody or a placebo, and the treatment significantly reduced the level of a protein in people’s blood that is associated with the loss of control of voluntary muscles. 

Because Qalsody received a fast-track approval from the FDA, Biogen must still provide more research data in the future, including from a trial examining how the drug affects people who carry the SOD1 gene mutation but do not yet show symptoms of ALS.

A version of this article first appeared on Medscape.com.



Atogepant prevents episodic migraine in some difficult-to-treat cases

Article Type
Changed
Thu, 05/04/2023 - 09:01

Atogepant helped reduce the number of mean migraine days among adults with episodic migraine who failed multiple other oral migraine medications, according to findings from a study presented at the 2023 annual meeting of the American Academy of Neurology.

Initial results from the double-blind ELEVATE trial showed the oral atogepant group had significantly fewer mean monthly migraine days (MMD) compared with a placebo group. There was also a significant difference in the number of participants who achieved 50% or greater reduction in the number of mean MMDs and a significant reduction in acute medication use days compared with the placebo group, according to Patricia Pozo-Rosich, MD, PhD, a headache specialist in the neurology department and director of the headache and craniofacial pain clinical unit and the Migraine Adaptive Brain Center at the Vall d’Hebron University Hospital in Barcelona, and colleagues.


The oral calcitonin gene-related peptide (CGRP) receptor antagonist is currently approved in the United States by the Food and Drug Administration as a preventative for both episodic and chronic migraine.
 

Results from ELEVATE

Overall, ELEVATE’s initial efficacy analysis population consisted of 309 adults aged between 18 and 80 years from North America and Europe with episodic migraine who had 4-14 MMDs and had treatment failure with at least two classes of conventional oral medication. After a 28-day screening period, participants received either 60 mg of oral atogepant once per day (154 participants) or a placebo (155 participants). In the efficacy analysis population, 56.0% of participants had failed two oral migraine preventative medication classes, while 44.0% failed three or more classes of medication. Dr. Pozo-Rosich noted that participants were taking a number of different oral preventatives across different medication classes, including flunarizine, beta blockers, topiramate, and amitriptyline, but data are not yet available on which participants had received certain combinations of oral medications.

“[T]hese people have already taken some type of prevention, so they’re not naive patients,” she said. “They’re usually more or less well treated in the sense of having had a contact with specialists or a general neurologist, someone that actually tries to do some prevention.”

The researchers examined change in MMDs from baseline to 12 weeks as the primary outcome, with 50% or greater MMD reduction, change in mean monthly headache days, and change in acute medication use days as secondary outcomes. Regarding the acute medications used, Dr. Pozo-Rosich said the main three types were analgesics, nonsteroidal anti-inflammatory drugs, and triptans; participants were excluded from the trial if they were taking opioids.

The results showed participants in the atogepant group had a significantly greater reduction in mean MMDs from baseline to 12 weeks than those in the placebo group (–4.20 vs. –1.85 days; P < .0001). Researchers also found statistically significant improvement in the atogepant group for 50% or greater reduction in MMDs, change in mean monthly headache days, and change in acute medication use days across 12 weeks of treatment compared with the placebo group. While the specific data analyses for secondary outcomes were not conducted in the initial analysis, Dr. Pozo-Rosich said the numbers “correlate with the primary outcome” as seen in other migraine trials.
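For readers keeping score, the placebo-adjusted treatment effect implied by those group means is a one-line subtraction. This back-of-the-envelope sketch uses only the two change-from-baseline figures reported above; everything else is arithmetic, not trial data:

```python
# Change from baseline in mean monthly migraine days (MMD) at 12 weeks,
# as reported for the two arms of ELEVATE's initial efficacy analysis.
atogepant_change = -4.20
placebo_change = -1.85

# Placebo-adjusted effect: additional monthly migraine days removed
# by atogepant beyond the change seen with placebo.
adjusted_effect = round(atogepant_change - placebo_change, 2)
print(adjusted_effect)  # -2.35
```

That is, atogepant was associated with about 2.35 fewer migraine days per month than placebo in this population.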

Compared with the placebo group, participants in the atogepant group had higher rates of constipation (10.3% vs. 2.5%), COVID-19 (9.6% vs. 8.3%), and nausea (7.1% vs. 3.2%), while the placebo group had a higher rate of nasopharyngitis (5.1% vs. 7.6%).*

Migraine is a prevalent and undertreated disease, and patients around the world with migraine are in need of treatment options that are both safe and effective, Dr. Pozo-Rosich said in an interview. “[E]ven in these hard-to-treat or difficult-to-treat migraine patients, you have a drug that works, and is safe, and well tolerated and effective,” she said.

That’s “kind of good news for all of us,” she said. Patients “need this type of good news and solution,” she explained, because they may not tolerate or have access to injectable medications. Atogepant would also give clinicians another option to offer patients with difficult-to-treat migraine. “It makes life easier for many physicians and many patients for many different reasons,” she said.

Dr. Pozo-Rosich said the likely next step in the research is to conduct the main analysis as well as post hoc analyses of the accumulated trial data “to understand patterns of response, understand the sustainability of the response, [and] adherence to the treatment in the long term.”
 

 

 

‘Exciting that it works well’ in difficult-to-treat patients

Commenting on the study, Alan M. Rapoport, MD, clinical professor of neurology at University of California, Los Angeles, and past president of the International Headache Society, agreed that better options in migraine treatment and prevention are needed.

“We needed something that was going to be better than what we had before,” he said.

Dr. Rapoport noted the study was well designed with strongly positive results. “It looks like it’s an effective drug, and it looks really good in that it’s effective for people that have failed all these preventives that have very little hope for the future,” he said.

He specifically praised the inclusion of older participants in the population. “You never see a study on 80-year-olds,” he said, “but I like that, because they felt it would be safe. There are 80-year-old patients – fewer of them than 40-year-old patients – but there are 80-year-old patients who still have migraine, so I’m really glad they put older patients in it,” he said.

For atogepant, he noted that “some patients won’t get the side effects, and some patients will tolerate the side effects because it’s working really well.” While the study was not a head-to-head comparison against other oral migraine preventatives, he pointed out that the high rate of constipation among participants in the trial setting may be a warning sign of future issues, as seen with other CGRP receptor antagonists.

“I can tell you that with erenumab, the monoclonal antibody that was injected in the double-blind studies, they didn’t find any significant increase in constipation,” he explained. However, some clinicians using erenumab in the real world have reported up to 20% of their patients are constipated. “It’s not good that they’re reporting 10% are constipated” in the study, he said.

Overall, “all you can really say is it does work well,” Dr. Rapoport said. “It’s exciting that it works well in such difficult-to-treat patients, and it does come with some side effects.”

Dr. Pozo-Rosich reports serving as a consultant and developing education materials for AbbVie, Eli Lilly, Novartis, Teva Pharmaceuticals, and Pfizer. Dr. Rapoport is the editor-in-chief of Neurology Reviews; he reports being a consultant for AbbVie, the developer of atogepant. The ELEVATE trial is supported by AbbVie.

*Correction, 5/4/23: An earlier version of this article misstated the percentage of COVID-positive patients in the study population. 


Article Source

FROM AAN 2023


BMI is a flawed measure of obesity. What are alternatives?

Article Type
Changed
Mon, 05/01/2023 - 13:53

“BMI is trash. Full stop.” This controversial tweet, which received thousands of likes and retweets, was cited in a recent article by one doctor on when physicians might stop using body mass index (BMI) to diagnose obesity.

BMI has for years been the consensus default method for assessing whether a person is overweight or has obesity, and is still widely used as the gatekeeper metric for treatment eligibility for certain weight-loss agents and bariatric surgery.

But growing appreciation of the limitations of BMI is causing many clinicians to consider alternative measures of obesity that can better assess both the amount of adiposity as well as its body location, an important determinant of the cardiometabolic consequences of fat.

Alternative metrics include waist circumference and/or waist-to-height ratio (WHtR); imaging methods such as CT, MRI, and dual-energy x-ray absorptiometry (DXA); and bioelectrical impedance to assess fat volume and location. All have made some inroads on the tight grip BMI has had on obesity assessment.

Chances are, however, that BMI will not fade away anytime soon given how entrenched it has become in clinical practice and for insurance coverage, as well as its relative simplicity and precision.

“BMI is embedded in a wide range of guidelines on the use of medications and surgery. It’s embedded in Food and Drug Administration regulations and for billing and insurance coverage. It would take extremely strong data and years of work to undo the infrastructure built around BMI and replace it with something else. I don’t see that happening [anytime soon],” commented Daniel H. Bessesen, MD, a professor at the University of Colorado at Denver, Aurora, and chief of endocrinology for Denver Health.

“It would be almost impossible to replace all the studies that have used BMI with investigations using some other measure,” he said.
 

BMI is ‘imperfect’

The entrenched position of BMI as the go-to metric doesn’t keep detractors from weighing in. As noted in a commentary on current clinical challenges surrounding obesity recently published in Annals of Internal Medicine, the journal’s editor-in-chief, Christine Laine, MD, and senior deputy editor Christina C. Wee, MD, listed six top issues clinicians must deal with, one of which, they say, is the need for a better measure of obesity than BMI.

“Unfortunately, BMI is an imperfect measure of body composition that differs with ethnicity, sex, body frame, and muscle mass,” noted Dr. Laine and Dr. Wee.

BMI is based on a person’s weight in kilograms divided by the square of their height in meters. A “healthy” BMI is between 18.5 and 24.9 kg/m2, overweight is 25-29.9, and 30 or greater is considered to represent obesity. However, certain ethnic groups have lower cutoffs for overweight or obesity because of evidence that such individuals can be at higher risk of obesity-related comorbidities at lower BMIs.
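The calculation described above can be sketched in a few lines of Python. This is a minimal illustration using the formula and the standard cutoffs quoted in the article (lower thresholds used for some ethnic groups are not modeled here):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Map a BMI value to the standard categories cited in the article."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"      # 18.5-24.9
    if value < 30:
        return "overweight"   # 25-29.9
    return "obesity"          # 30 or greater

# Example: 70 kg at 1.75 m gives a BMI of about 22.9, in the "healthy" range.
print(round(bmi(70, 1.75), 1), bmi_category(bmi(70, 1.75)))
```

Note that the function names and category labels are illustrative choices, not part of any clinical standard.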

“BMI was chosen as the initial screening tool [for obesity] not because anyone thought it was perfect or the best measure but because of its simplicity. All you need is height, weight, and a calculator,” Dr. Wee said in an interview.

Numerous online calculators are available, including one from the Centers for Disease Control and Prevention where height in feet and inches and weight in pounds can be entered to generate the BMI.

BMI is also inherently limited by being “a proxy for adiposity” and not a direct measure, added Dr. Wee, who is also director of the Obesity Research Program of Beth Israel Deaconess Medical Center, Boston.

As such, BMI can’t distinguish between fat and muscle because it relies on weight only to gauge adiposity, noted Tiffany Powell-Wiley, MD, an obesity researcher at the National Heart, Lung, and Blood Institute in Bethesda, Md. Another shortcoming of BMI is that it “is good for distinguishing population-level risk for cardiovascular disease and other chronic diseases, but it does not help as much for distinguishing risk at an individual level,” she said in an interview.

These and other drawbacks have prompted researchers to look for other useful metrics. WHtR, for example, has recently made headway as a potential BMI alternative or complement.
 

 

 

The case for WHtR

Concern about overreliance on BMI despite its limitations is not new. In 2015, an American Heart Association scientific statement from the group’s Obesity Committee concluded that “BMI alone, even with lower thresholds, is a useful but not an ideal tool for identification of obesity or assessment of cardiovascular risk,” especially for people from Asian, Black, Hispanic, and Pacific Islander populations.

The writing panel also recommended that clinicians measure waist circumference annually and use that information along with BMI “to better gauge cardiovascular risk in diverse populations.”

Momentum for moving beyond BMI alone has continued to build following the AHA statement.

In September 2022, the National Institute for Health and Care Excellence (NICE), which sets policies for the United Kingdom’s National Health Service, revised its guidance for assessment and management of people with obesity. The updated guidance recommends that when clinicians assess “adults with BMI below 35 kg/m2, measure and use their WHtR, as well as their BMI, as a practical estimate of central adiposity and use these measurements to help to assess and predict health risks.”

NICE released an extensive literature review with the revision, and based on the evidence, said that “using waist-to-height ratio as well as BMI would help give a practical estimate of central adiposity in adults with BMI under 35 kg/m2. This would in turn help professionals assess and predict health risks.”

However, the review added that, “because people with a BMI over 35 kg/m2 are always likely to have a high WHtR, the committee recognized that it may not be a useful addition for predicting health risks in this group.” The 2022 NICE review also said that it is “important to estimate central adiposity when assessing future health risks, including for people whose BMI is in the healthy-weight category.”
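The WHtR computation itself is simple: waist circumference divided by height, in the same units. Below is a minimal sketch; the 0.5 cutoff in the comment is the commonly cited "waist less than half your height" rule of thumb and is an assumption here, not a threshold stated in the article:

```python
def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """Waist-to-height ratio (WHtR): waist circumference divided by height,
    both in the same units (centimeters here)."""
    return waist_cm / height_cm

# A WHtR at or above 0.5 is often used as a rough marker of elevated
# central adiposity (assumed rule of thumb, not from this article).
ratio = waist_to_height_ratio(85, 170)
print(ratio)  # 0.5
```

Because the units cancel, the same function works with inches, as long as waist and height use the same unit.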

This new emphasis by NICE on measuring and using WHtR as part of obesity assessment “represents an important change in population health policy,” commented Dr. Powell-Wiley. “I expect more professional organizations will endorse use of waist circumference or waist-to-height ratio now that NICE has taken this step,” she predicted.

Waist circumference and WHtR may become standard measures of adiposity in clinical practice over the next 5-10 years.

The recent move by NICE to highlight a complementary role for WHtR “is another acknowledgment that BMI is an imperfect tool for stratifying cardiometabolic risk in a diverse population, especially in people with lower BMIs” because of its variability, commented Jamie Almandoz, MD, medical director of the weight wellness program at UT Southwestern Medical Center, Dallas.
 

WHtR vs. BMI

Another recent step forward for WHtR came with the publication of a post hoc analysis of data collected in the PARADIGM-HF trial, a study that had the primary purpose of comparing two medications for improving outcomes in more than 8,000 patients with heart failure with reduced ejection fraction.

The new analysis showed that “two indices that incorporate waist circumference and height, but not weight, showed a clearer association between greater adiposity and a higher risk of heart failure hospitalization,” compared with BMI.

WHtR was one of the two indices identified as being a better correlate for the adverse effect of excess adiposity compared with BMI.

The authors of the post hoc analysis did not design their analysis to compare WHtR with BMI. Instead, their goal was to better understand what’s known as the “obesity paradox” in people with heart failure with reduced ejection fraction: the recurring observation that patients with heart failure who have lower BMIs fare worse, with higher rates of mortality and adverse cardiovascular outcomes, than patients with higher BMIs.

The new analysis showed that this paradox disappeared when WHtR was substituted for BMI as the obesity metric.

This “provides meaningful data about the superiority of WHtR, compared with BMI, for predicting heart failure outcomes,” said Dr. Powell-Wiley, although she cautioned that the analysis was limited by scant data in diverse populations and did not look at other important cardiovascular disease outcomes. While Dr. Powell-Wiley does not think that WHtR needs assessment in a prospective, controlled trial, she called for analysis of pooled prospective studies with more diverse populations to better document the advantages of WHtR over BMI.

The PARADIGM-HF post hoc analysis shows again how flawed BMI is for health assessment and the relative importance of an individualized understanding of a person’s body composition, Dr. Almandoz said in an interview. “As we collect more data, there is increasing awareness of how imperfect BMI is.”
 

 

 

Measuring waist circumference is tricky

Although WHtR looks promising as a substitute for or add-on to BMI, it has its own limitations, particularly the challenge of accurately measuring waist circumference.

Measuring waist circumference “not only takes more time but requires the assessor to be well trained about where to put the tape measure and making sure it’s measured at the same place each time,” even when different people take serial measurements from individual patients, noted Dr. Wee. Determining waist circumference can also be technically difficult when done on larger people, she added, and collectively these challenges make waist circumference “less reproducible from measurement to measurement.”

“It’s relatively clear how to standardize measurement of weight and height, but there is a huge amount of variability when the waist is measured,” agreed Dr. Almandoz. “And waist circumference also differs by ethnicity, race, sex, and body frame. There are significant differences in waist circumference levels that associate with increased health risks” between, for example, White and South Asian people.

Another limitation of waist circumference and WHtR is that they “cannot differentiate between visceral and abdominal subcutaneous adipose tissue, which are vastly different regarding cardiometabolic risk,” commented Ian Neeland, MD, director of cardiovascular prevention at the University Hospitals Harrington Heart & Vascular Institute, Cleveland.
 

The imaging option

“Waist-to-height ratio is not the ultimate answer,” Dr. Neeland said in an interview. He instead endorsed “advanced imaging for body fat distribution,” such as CT or MRI scans, as his pick for what should be the standard obesity metric, “given that it is much more specific and actionable for both risk assessment and response to therapy. I expect slow but steady advancements that move away from BMI cutoffs, for example for bariatric surgery, given that BMI is an imprecise and crude tool.”

But although imaging methods such as CT, MRI, and DXA may provide the best accuracy and precision for tracking the volume of a person’s cardiometabolically dangerous fat, they are also hampered by relatively high cost and, for CT and DXA, the issue of radiation exposure.

“CT, MRI, and DXA scans give more in-depth assessment of body composition, but should we expose people to the radiation and the cost?” Dr. Almandoz wondered.

“Height, weight, and waist circumference cost nothing to obtain,” creating a big relative disadvantage for imaging, said Naveed Sattar, MD, professor of metabolic medicine at the University of Glasgow.

“Data would need to show that imaging gives clinicians substantially more information about future risk” to justify its price, Dr. Sattar emphasized.
 

BMI’s limits mean adding on

Regardless of whichever alternatives to BMI end up getting used most, experts generally agree that BMI alone is looking increasingly inadequate.

“Over the next 5 years, BMI will come to be seen as a screening tool that categorizes people into general risk groups” that also needs “other metrics and variables, such as age, race, ethnicity, family history, blood glucose, and blood pressure to better describe health risk in an individual,” predicted Dr. Bessesen.

The endorsement of WHtR by NICE “will lead to more research into how to incorporate WHtR into routine practice. We need more evidence to translate what NICE said into practice,” said Dr. Sattar. “I don’t think we’ll see a shift away from BMI, but we’ll add alternative measures that are particularly useful in certain patients.”

“Because we live in diverse societies, we need to individualize risk assessment and couple that with technology that makes analysis of body composition more accessible,” agreed Dr. Almandoz. He noted that the UT Southwestern weight wellness program where he practices has, for about the past decade, routinely collected waist circumference and bioelectrical impedance data as well as BMI on all people seen in the practice for obesity concerns. Making these additional measurements on a routine basis also helps strengthen patient engagement.

“We get into trouble when we make rigid health policy and clinical decisions based on BMI alone without looking at the patient holistically,” said Dr. Wee. “Patients are more than arbitrary numbers, and clinicians should make clinical decisions based on the totality of evidence for each individual patient.”

Dr. Bessesen, Dr. Wee, Dr. Powell-Wiley, and Dr. Almandoz reported no relevant financial relationships. Dr. Neeland has reported being a consultant for Merck. Dr. Sattar has reported being a consultant or speaker for Abbott Laboratories, Afimmune, Amgen, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Hanmi Pharmaceuticals, Janssen, MSD, Novartis, Novo Nordisk, Pfizer, Roche Diagnostics, and Sanofi.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

“BMI is trash. Full stop.” This controversial tweet, which received thousands of likes and retweets, was cited in a recent article by one doctor on when physicians might stop using body mass index (BMI) to diagnose obesity.

BMI has for years been the consensus default method for assessing whether a person is overweight or has obesity, and is still widely used as the gatekeeper metric for treatment eligibility for certain weight-loss agents and bariatric surgery.

But growing appreciation of the limitations of BMI is causing many clinicians to consider alternative measures of obesity that can better assess both the amount of adiposity as well as its body location, an important determinant of the cardiometabolic consequences of fat.

Alternative metrics include waist circumference and/or waist-to-height ratio (WHtR); imaging methods such as CT, MRI, and dual-energy x-ray absorptiometry (DXA); and bioelectrical impedance to assess fat volume and location. All have made some inroads on the tight grip BMI has had on obesity assessment.

Chances are, however, that BMI will not fade away anytime soon given how entrenched it has become in clinical practice and for insurance coverage, as well as its relative simplicity and precision.

“BMI is embedded in a wide range of guidelines on the use of medications and surgery. It’s embedded in Food and Drug Administration regulations and for billing and insurance coverage. It would take extremely strong data and years of work to undo the infrastructure built around BMI and replace it with something else. I don’t see that happening [anytime soon],” commented Daniel H. Bessesen, MD, a professor at the University of Colorado at Denver, Aurora, and chief of endocrinology for Denver Health.

“It would be almost impossible to replace all the studies that have used BMI with investigations using some other measure,” he said.
 

BMI Is ‘imperfect’

The entrenched position of BMI as the go-to metric doesn’t keep detractors from weighing in. As noted in a commentary on current clinical challenges surrounding obesity recently published in Annals of Internal Medicine, the journal’s editor-in-chief, Christine Laine, MD, and senior deputy editor Christina C. Wee, MD, listed six top issues clinicians must deal with, one of which, they say, is the need for a better measure of obesity than BMI.

“Unfortunately, BMI is an imperfect measure of body composition that differs with ethnicity, sex, body frame, and muscle mass,” noted Dr. Laine and Dr. Wee.

BMI is based on a person’s weight in kilograms divided by the square of their height in meters. A “healthy” BMI is between 18.5 and 24.9 kg/m2, overweight is 25-29.9, and 30 or greater is considered to represent obesity. However, certain ethnic groups have lower cutoffs for overweight or obesity because of evidence that such individuals can be at higher risk of obesity-related comorbidities at lower BMIs.

“BMI was chosen as the initial screening tool [for obesity] not because anyone thought it was perfect or the best measure but because of its simplicity. All you need is height, weight, and a calculator,” Dr. Wee said in an interview.

Numerous online calculators are available, including one from the Centers for Disease Control and Prevention where height in feet and inches and weight in pounds can be entered to generate the BMI.

BMI is also inherently limited by being “a proxy for adiposity” and not a direct measure, added Dr. Wee, who is also director of the Obesity Research Program of Beth Israel Deaconess Medical Center, Boston.

As such, BMI can’t distinguish between fat and muscle because it relies on weight only to gauge adiposity, noted Tiffany Powell-Wiley, MD, an obesity researcher at the National Heart, Lung, and Blood Institute in Bethesda, Md. Another shortcoming of BMI is that it “is good for distinguishing population-level risk for cardiovascular disease and other chronic diseases, but it does not help as much for distinguishing risk at an individual level,” she said in an interview.

These and other drawbacks have prompted researchers to look for other useful metrics. WHtR, for example, has recently made headway as a potential BMI alternative or complement.
 

 

 

The case for WHtR

Concern about overreliance on BMI despite its limitations is not new. In 2015, an American Heart Association scientific statement from the group’s Obesity Committee concluded that “BMI alone, even with lower thresholds, is a useful but not an ideal tool for identification of obesity or assessment of cardiovascular risk,” especially for people from Asian, Black, Hispanic, and Pacific Islander populations.

The writing panel also recommended that clinicians measure waist circumference annually and use that information along with BMI “to better gauge cardiovascular risk in diverse populations.”

Momentum for moving beyond BMI alone has continued to build following the AHA statement.

In September 2022, the National Institute for Health and Care Excellence, which sets policies for the United Kingdom’s National Health Service, revised its guidancefor assessment and management of people with obesity. The updated guidance recommends that when clinicians assess “adults with BMI below 35 kg/m2, measure and use their WHtR, as well as their BMI, as a practical estimate of central adiposity and use these measurements to help to assess and predict health risks.”

NICE released an extensive literature review with the revision, and based on the evidence, said that “using waist-to-height ratio as well as BMI would help give a practical estimate of central adiposity in adults with BMI under 35 kg/m2. This would in turn help professionals assess and predict health risks.”

However, the review added that, “because people with a BMI over 35 kg/m2 are always likely to have a high WHtR, the committee recognized that it may not be a useful addition for predicting health risks in this group.” The 2022 NICE review also said that it is “important to estimate central adiposity when assessing future health risks, including for people whose BMI is in the healthy-weight category.”

This new emphasis by NICE on measuring and using WHtR as part of obesity assessment “represents an important change in population health policy,” commented Dr. Powell-Wiley. “I expect more professional organizations will endorse use of waist circumference or waist-to-height ratio now that NICE has taken this step,” she predicted.

Waist circumference and WHtR may become standard measures of adiposity in clinical practice over the next 5-10 years.

The recent move by NICE to highlight a complementary role for WHtR “is another acknowledgment that BMI is an imperfect tool for stratifying cardiometabolic risk in a diverse population, especially in people with lower BMIs” because of its variability, commented Jamie Almandoz, MD, medical director of the weight wellness program at UT Southwestern Medical Center, Dallas.
 

WHtR vs. BMI

Another recent step forward for WHtR came with the publication of a post hoc analysis of data collected in the PARADIGM-HF trial, a study that had the primary purpose of comparing two medications for improving outcomes in more than 8,000 patients with heart failure with reduced ejection fraction.

The new analysis showed that “two indices that incorporate waist circumference and height, but not weight, showed a clearer association between greater adiposity and a higher risk of heart failure hospitalization,” compared with BMI.

WHtR was one of the two indices identified as being a better correlate for the adverse effect of excess adiposity compared with BMI.

The authors of the post hoc analysis did not design their analysis to compare WHtR with BMI. Instead, their goal was to better understand what’s known as the “obesity paradox” in people with heart failure with reduced ejection fraction: The recurring observation that, when these patients with heart failure have lower BMIs they fare worse, with higher rates of mortality and adverse cardiovascular outcomes, compared with patients with higher BMIs.

The new analysis showed that this paradox disappeared when WHtR was substituted for BMI as the obesity metric.

This “provides meaningful data about the superiority of WHtR, compared with BMI, for predicting heart failure outcomes,” said Dr. Powell-Wiley, although she cautioned that the analysis was limited by scant data in diverse populations and did not look at other important cardiovascular disease outcomes. While Dr. Powell-Wiley does not think that WHtR needs assessment in a prospective, controlled trial, she called for analysis of pooled prospective studies with more diverse populations to better document the advantages of WHtR over BMI.

The PARADIGM-HF post hoc analysis shows again how flawed BMI is for health assessment and the relative importance of an individualized understanding of a person’s body composition, Dr. Almandoz said in an interview. “As we collect more data, there is increasing awareness of how imperfect BMI is.”
 

 

 

Measuring waist circumference is tricky

Although WHtR looks promising as a substitute for or add-on to BMI, it has its own limitations, particularly the challenge of accurately measuring waist circumference.

Measuring waist circumference “not only takes more time but requires the assessor to be well trained about where to put the tape measure and making sure it’s measured at the same place each time,” even when different people take serial measurements from individual patients, noted Dr. Wee. Determining waist circumference can also be technically difficult when done on larger people, she added, and collectively these challenges make waist circumference “less reproducible from measurement to measurement.”

“It’s relatively clear how to standardize measurement of weight and height, but there is a huge amount of variability when the waist is measured,” agreed Dr. Almandoz. “And waist circumference also differs by ethnicity, race, sex, and body frame. There are significant differences in waist circumference levels that associate with increased health risks” between, for example, White and South Asian people.

Another limitation of waist circumference and WHtR is that they “cannot differentiate between visceral and abdominal subcutaneous adipose tissue, which are vastly different regarding cardiometabolic risk, commented Ian Neeland, MD, director of cardiovascular prevention at the University Hospitals Harrington Heart & Vascular Institute, Cleveland.
 

The imaging option

“Waist-to-height ratio is not the ultimate answer,” Dr. Neeland said in an interview. He instead endorsed “advanced imaging for body fat distribution,” such as CT or MRI scans, as his pick for what should be the standard obesity metric, “given that it is much more specific and actionable for both risk assessment and response to therapy. I expect slow but steady advancements that move away from BMI cutoffs, for example for bariatric surgery, given that BMI is an imprecise and crude tool.”

But although imaging with methods like CT and MRI may provide the best accuracy and precision for tracking the volume of a person’s cardiometabolically dangerous fat, they are also hampered by relatively high cost and, for CT and DXA, the issue of radiation exposure.

“CT, MRI, and DXA scans give more in-depth assessment of body composition, but should we expose people to the radiation and the cost?” Dr. Almandoz wondered.

“Height, weight, and waist circumference cost nothing to obtain,” creating a big relative disadvantage for imaging, said Naveed Sattar, MD, professor of metabolic medicine at the University of Glasgow.

“Data would need to show that imaging gives clinicians substantially more information about future risk” to justify its price, Dr. Sattar emphasized.
 

BMI’s limits mean adding on

Regardless of whichever alternatives to BMI end up getting used most, experts generally agree that BMI alone is looking increasingly inadequate.

“Over the next 5 years, BMI will come to be seen as a screening tool that categorizes people into general risk groups” that also needs “other metrics and variables, such as age, race, ethnicity, family history, blood glucose, and blood pressure to better describe health risk in an individual,” predicted Dr. Bessesen.

The endorsement of WHtR by NICE “will lead to more research into how to incorporate WHtR into routine practice. We need more evidence to translate what NICE said into practice,” said Dr. Sattar. “I don’t think we’ll see a shift away from BMI, but we’ll add alternative measures that are particularly useful in certain patients.”

“Because we live in diverse societies, we need to individualize risk assessment and couple that with technology that makes analysis of body composition more accessible,” agreed Dr. Almandoz. He noted that the UT Southwestern weight wellness program where he practices has, for about the past decade, routinely collected waist circumference and bioelectrical impedance data as well as BMI on all people seen in the practice for obesity concerns. Making these additional measurements on a routine basis also helps strengthen patient engagement.

“We get into trouble when we make rigid health policy and clinical decisions based on BMI alone without looking at the patient holistically,” said Dr. Wee. “Patients are more than arbitrary numbers, and clinicians should make clinical decisions based on the totality of evidence for each individual patient.”

Dr. Bessesen, Dr. Wee, Dr. Powell-Wiley, and Dr. Almandoz reported no relevant financial relationships. Dr. Neeland has reported being a consultant for Merck. Dr. Sattar has reported being a consultant or speaker for Abbott Laboratories, Afimmune, Amgen, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Hanmi Pharmaceuticals, Janssen, MSD, Novartis, Novo Nordisk, Pfizer, Roche Diagnostics, and Sanofi.

A version of this article originally appeared on Medscape.com.

“BMI is trash. Full stop.” This controversial tweet, which received thousands of likes and retweets, was cited in a recent article by one doctor on when physicians might stop using body mass index (BMI) to diagnose obesity.

BMI has for years been the consensus default method for assessing whether a person is overweight or has obesity, and is still widely used as the gatekeeper metric for treatment eligibility for certain weight-loss agents and bariatric surgery.

But growing appreciation of the limitations of BMI is causing many clinicians to consider alternative measures of obesity that can better assess both the amount of adiposity as well as its body location, an important determinant of the cardiometabolic consequences of fat.

Alternative metrics include waist circumference and/or waist-to-height ratio (WHtR); imaging methods such as CT, MRI, and dual-energy x-ray absorptiometry (DXA); and bioelectrical impedance to assess fat volume and location. All have made some inroads on the tight grip BMI has had on obesity assessment.

Chances are, however, that BMI will not fade away anytime soon given how entrenched it has become in clinical practice and for insurance coverage, as well as its relative simplicity and precision.

“BMI is embedded in a wide range of guidelines on the use of medications and surgery. It’s embedded in Food and Drug Administration regulations and for billing and insurance coverage. It would take extremely strong data and years of work to undo the infrastructure built around BMI and replace it with something else. I don’t see that happening [anytime soon],” commented Daniel H. Bessesen, MD, a professor at the University of Colorado at Denver, Aurora, and chief of endocrinology for Denver Health.

“It would be almost impossible to replace all the studies that have used BMI with investigations using some other measure,” he said.
 

BMI Is ‘imperfect’

The entrenched position of BMI as the go-to metric doesn’t keep detractors from weighing in. As noted in a commentary on current clinical challenges surrounding obesity recently published in Annals of Internal Medicine, the journal’s editor-in-chief, Christine Laine, MD, and senior deputy editor Christina C. Wee, MD, listed six top issues clinicians must deal with, one of which, they say, is the need for a better measure of obesity than BMI.

“Unfortunately, BMI is an imperfect measure of body composition that differs with ethnicity, sex, body frame, and muscle mass,” noted Dr. Laine and Dr. Wee.

BMI is based on a person’s weight in kilograms divided by the square of their height in meters. A “healthy” BMI is between 18.5 and 24.9 kg/m2, overweight is 25-29.9, and 30 or greater is considered to represent obesity. However, certain ethnic groups have lower cutoffs for overweight or obesity because of evidence that such individuals can be at higher risk of obesity-related comorbidities at lower BMIs.
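The calculation and the general cutoffs just described can be sketched in a few lines of Python (ethnicity-specific lower cutoffs, mentioned above, are deliberately not modeled in this illustration):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Map a BMI value to the general categories cited in this article.

    Note: some ethnic groups use lower cutoffs for overweight and obesity,
    which this simple sketch does not capture.
    """
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    return "obesity"

# Example: a person weighing 95 kg at 1.75 m tall
value = bmi(95, 1.75)
print(round(value, 1))        # 31.0
print(bmi_category(value))    # obesity
```

As the article notes, all that is needed in practice is height, weight, and a calculator; online tools such as the CDC's perform the same arithmetic.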

“BMI was chosen as the initial screening tool [for obesity] not because anyone thought it was perfect or the best measure but because of its simplicity. All you need is height, weight, and a calculator,” Dr. Wee said in an interview.

Numerous online calculators are available, including one from the Centers for Disease Control and Prevention where height in feet and inches and weight in pounds can be entered to generate the BMI.

BMI is also inherently limited by being “a proxy for adiposity” and not a direct measure, added Dr. Wee, who is also director of the Obesity Research Program of Beth Israel Deaconess Medical Center, Boston.

As such, BMI can’t distinguish between fat and muscle because it relies on weight only to gauge adiposity, noted Tiffany Powell-Wiley, MD, an obesity researcher at the National Heart, Lung, and Blood Institute in Bethesda, Md. Another shortcoming of BMI is that it “is good for distinguishing population-level risk for cardiovascular disease and other chronic diseases, but it does not help as much for distinguishing risk at an individual level,” she said in an interview.

These and other drawbacks have prompted researchers to look for other useful metrics. WHtR, for example, has recently made headway as a potential BMI alternative or complement.

The case for WHtR

Concern about overreliance on BMI despite its limitations is not new. In 2015, an American Heart Association scientific statement from the group’s Obesity Committee concluded that “BMI alone, even with lower thresholds, is a useful but not an ideal tool for identification of obesity or assessment of cardiovascular risk,” especially for people from Asian, Black, Hispanic, and Pacific Islander populations.

The writing panel also recommended that clinicians measure waist circumference annually and use that information along with BMI “to better gauge cardiovascular risk in diverse populations.”

Momentum for moving beyond BMI alone has continued to build following the AHA statement.

In September 2022, the National Institute for Health and Care Excellence (NICE), which sets policies for the United Kingdom’s National Health Service, revised its guidance for the assessment and management of people with obesity. For “adults with BMI below 35 kg/m2,” the updated guidance recommends that clinicians “measure and use their WHtR, as well as their BMI, as a practical estimate of central adiposity and use these measurements to help to assess and predict health risks.”

NICE released an extensive literature review with the revision, and based on the evidence, said that “using waist-to-height ratio as well as BMI would help give a practical estimate of central adiposity in adults with BMI under 35 kg/m2. This would in turn help professionals assess and predict health risks.”

However, the review added that, “because people with a BMI over 35 kg/m2 are always likely to have a high WHtR, the committee recognized that it may not be a useful addition for predicting health risks in this group.” The 2022 NICE review also said that it is “important to estimate central adiposity when assessing future health risks, including for people whose BMI is in the healthy-weight category.”
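The NICE approach described above reduces to a small amount of arithmetic: WHtR is simply waist circumference divided by height, and it is treated as informative only when BMI is under 35 kg/m2. A minimal sketch in Python follows; note that the 0.5 flag for elevated central adiposity is a commonly used convention assumed here, not a figure quoted in this article:

```python
def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """WHtR: waist circumference divided by height, in the same units."""
    return waist_cm / height_cm

def central_adiposity_flag(waist_cm: float, height_cm: float, bmi_value: float):
    """Apply WHtR as an adjunct to BMI, per the NICE guidance described above.

    For BMI of 35 or more, WHtR is almost always high and adds little, so
    return None. The 0.5 cutoff below is an assumed convention, not a number
    stated in this article.
    """
    if bmi_value >= 35:
        return None  # WHtR not a useful addition in this group
    return waist_to_height_ratio(waist_cm, height_cm) >= 0.5

print(central_adiposity_flag(94, 175, 27.5))   # True  (WHtR ~0.54)
print(central_adiposity_flag(80, 175, 22.0))   # False (WHtR ~0.46)
print(central_adiposity_flag(110, 175, 36.0))  # None
```

The second example illustrates the point about the healthy-weight category: a person with a BMI of 22 could still be flagged if central adiposity pushed the ratio past the threshold.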

This new emphasis by NICE on measuring and using WHtR as part of obesity assessment “represents an important change in population health policy,” commented Dr. Powell-Wiley. “I expect more professional organizations will endorse use of waist circumference or waist-to-height ratio now that NICE has taken this step,” she predicted.

Waist circumference and WHtR may become standard measures of adiposity in clinical practice over the next 5-10 years.

The recent move by NICE to highlight a complementary role for WHtR “is another acknowledgment that BMI is an imperfect tool for stratifying cardiometabolic risk in a diverse population, especially in people with lower BMIs” because of its variability, commented Jamie Almandoz, MD, medical director of the weight wellness program at UT Southwestern Medical Center, Dallas.
 

WHtR vs. BMI

Another recent step forward for WHtR came with the publication of a post hoc analysis of data collected in the PARADIGM-HF trial, a study that had the primary purpose of comparing two medications for improving outcomes in more than 8,000 patients with heart failure with reduced ejection fraction.

The new analysis showed that “two indices that incorporate waist circumference and height, but not weight, showed a clearer association between greater adiposity and a higher risk of heart failure hospitalization,” compared with BMI.

WHtR was one of the two indices identified as being a better correlate for the adverse effect of excess adiposity compared with BMI.

The authors of the post hoc analysis did not design their analysis to compare WHtR with BMI. Instead, their goal was to better understand what’s known as the “obesity paradox” in people with heart failure with reduced ejection fraction: the recurring observation that patients with lower BMIs fare worse, with higher rates of mortality and adverse cardiovascular outcomes, than patients with higher BMIs.

The new analysis showed that this paradox disappeared when WHtR was substituted for BMI as the obesity metric.

This “provides meaningful data about the superiority of WHtR, compared with BMI, for predicting heart failure outcomes,” said Dr. Powell-Wiley, although she cautioned that the analysis was limited by scant data in diverse populations and did not look at other important cardiovascular disease outcomes. While Dr. Powell-Wiley does not think that WHtR needs assessment in a prospective, controlled trial, she called for analysis of pooled prospective studies with more diverse populations to better document the advantages of WHtR over BMI.

The PARADIGM-HF post hoc analysis shows again how flawed BMI is for health assessment and the relative importance of an individualized understanding of a person’s body composition, Dr. Almandoz said in an interview. “As we collect more data, there is increasing awareness of how imperfect BMI is.”

Measuring waist circumference is tricky

Although WHtR looks promising as a substitute for or add-on to BMI, it has its own limitations, particularly the challenge of accurately measuring waist circumference.

Measuring waist circumference “not only takes more time but requires the assessor to be well trained about where to put the tape measure and making sure it’s measured at the same place each time,” even when different people take serial measurements from individual patients, noted Dr. Wee. Determining waist circumference can also be technically difficult when done on larger people, she added, and collectively these challenges make waist circumference “less reproducible from measurement to measurement.”

“It’s relatively clear how to standardize measurement of weight and height, but there is a huge amount of variability when the waist is measured,” agreed Dr. Almandoz. “And waist circumference also differs by ethnicity, race, sex, and body frame. There are significant differences in waist circumference levels that associate with increased health risks” between, for example, White and South Asian people.

Another limitation of waist circumference and WHtR is that they “cannot differentiate between visceral and abdominal subcutaneous adipose tissue, which are vastly different regarding cardiometabolic risk,” commented Ian Neeland, MD, director of cardiovascular prevention at the University Hospitals Harrington Heart & Vascular Institute, Cleveland.

The imaging option

“Waist-to-height ratio is not the ultimate answer,” Dr. Neeland said in an interview. He instead endorsed “advanced imaging for body fat distribution,” such as CT or MRI scans, as his pick for what should be the standard obesity metric, “given that it is much more specific and actionable for both risk assessment and response to therapy. I expect slow but steady advancements that move away from BMI cutoffs, for example for bariatric surgery, given that BMI is an imprecise and crude tool.”

But although imaging with methods like CT and MRI may provide the best accuracy and precision for tracking the volume of a person’s cardiometabolically dangerous fat, they are also hampered by relatively high cost and, for CT and DXA, the issue of radiation exposure.

“CT, MRI, and DXA scans give more in-depth assessment of body composition, but should we expose people to the radiation and the cost?” Dr. Almandoz wondered.

“Height, weight, and waist circumference cost nothing to obtain,” creating a big relative disadvantage for imaging, said Naveed Sattar, MD, professor of metabolic medicine at the University of Glasgow.

“Data would need to show that imaging gives clinicians substantially more information about future risk” to justify its price, Dr. Sattar emphasized.
 

BMI’s limits mean adding on

Regardless of whichever alternatives to BMI end up getting used most, experts generally agree that BMI alone is looking increasingly inadequate.

“Over the next 5 years, BMI will come to be seen as a screening tool that categorizes people into general risk groups” that also needs “other metrics and variables, such as age, race, ethnicity, family history, blood glucose, and blood pressure to better describe health risk in an individual,” predicted Dr. Bessesen.

The endorsement of WHtR by NICE “will lead to more research into how to incorporate WHtR into routine practice. We need more evidence to translate what NICE said into practice,” said Dr. Sattar. “I don’t think we’ll see a shift away from BMI, but we’ll add alternative measures that are particularly useful in certain patients.”

“Because we live in diverse societies, we need to individualize risk assessment and couple that with technology that makes analysis of body composition more accessible,” agreed Dr. Almandoz. He noted that the UT Southwestern weight wellness program where he practices has, for about the past decade, routinely collected waist circumference and bioelectrical impedance data as well as BMI on all people seen in the practice for obesity concerns. Making these additional measurements on a routine basis also helps strengthen patient engagement.

“We get into trouble when we make rigid health policy and clinical decisions based on BMI alone without looking at the patient holistically,” said Dr. Wee. “Patients are more than arbitrary numbers, and clinicians should make clinical decisions based on the totality of evidence for each individual patient.”

Dr. Bessesen, Dr. Wee, Dr. Powell-Wiley, and Dr. Almandoz reported no relevant financial relationships. Dr. Neeland has reported being a consultant for Merck. Dr. Sattar has reported being a consultant or speaker for Abbott Laboratories, Afimmune, Amgen, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Hanmi Pharmaceuticals, Janssen, MSD, Novartis, Novo Nordisk, Pfizer, Roche Diagnostics, and Sanofi.

A version of this article originally appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

Ablation for atrial fibrillation may protect the aging brain

Article Type
Changed
Wed, 04/26/2023 - 10:08

Treating atrial fibrillation with catheter ablation in addition to medical management may offer greater protection against cognitive impairment than medical management alone, new research suggests.

Investigators found adults who had previously undergone catheter ablation were significantly less likely to be cognitively impaired during the 2-year study period, compared with those who received medical management alone.

“Catheter ablation is intended to stop atrial fibrillation and restore the normal rhythm of the heart. By doing so, there is an improved cerebral hemodynamic profile,” said Bahadar S. Srichawla, DO, department of neurology, University of Massachusetts, Worcester.

“Thus, long-term cognitive outcomes may be improved due to improved blood flow to the brain by restoring the normal rhythm of the heart,” he added.

This research was presented at the 2023 annual meeting of the American Academy of Neurology.
 

Heart-brain connection

The study involved 887 older adults (mean age 75; 49% women) with atrial fibrillation participating in the SAGE-AF (Systematic Assessment of Geriatric Elements) study. A total of 193 (22%) participants underwent catheter ablation prior to enrollment. These individuals more frequently had an implantable cardiac device (46% vs. 28%, P < .001) and persistent atrial fibrillation (31% vs. 23%, P < .05).

Cognitive function was assessed using the Montreal Cognitive Assessment (MoCA) tool at baseline and 1 and 2 years, with cognitive impairment defined as a MoCA score of 23 or below. Individuals who had catheter ablation had an average MoCA score of 25, compared with an average score of 23 in those who didn’t have catheter ablation.

After adjusting for potential confounding factors such as heart disease, renal disease, sleep apnea, and atrial fibrillation risk score, those who underwent catheter ablation were 36% less likely to develop cognitive impairment over 2 years than those who were treated only with medication (adjusted odds ratio, 0.64; 95% CI, 0.46-0.88).
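The 36% figure follows directly from the adjusted odds ratio: an odds ratio of 0.64 implies a 1 − 0.64 = 0.36, or 36%, reduction in the odds of the outcome. A quick arithmetic check in Python, using the point estimate and confidence bounds reported above:

```python
def odds_reduction(odds_ratio: float) -> int:
    """Percent reduction in odds implied by an odds ratio below 1."""
    return round((1 - odds_ratio) * 100)

# Point estimate: adjusted OR 0.64 -> 36% lower odds of cognitive impairment
print(odds_reduction(0.64))  # 36
# The 95% CI of 0.46-0.88 corresponds to reductions of roughly 54% down to 12%
print(odds_reduction(0.46), odds_reduction(0.88))  # 54 12
```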

During his presentation, Dr. Srichawla noted there is a hypothesis that individuals who are anticoagulated with warfarin may be prone to cerebral microbleeds and may be more cognitively impaired over time.

However, in a subgroup analysis, “cognitive function was similar at 2-year follow-up in those anticoagulated with warfarin, compared with all other anticoagulants. However, it should be noted that in this study, a direct head-to-head comparison was not done,” Dr. Srichawla told attendees.

“In patients with atrial fibrillation, catheter ablation should be discussed as a potential treatment strategy, particularly in patients who have or are at risk for cognitive decline and dementia,” Dr. Srichawla said.
 

Intriguing findings

Commenting on the research, Percy Griffin, PhD, Alzheimer’s Association director of scientific engagement, said the study is “intriguing and adds to what we know from previous research connecting cardiovascular and cognitive health.”

“However, there are limitations to this study,” Dr. Griffin said, “including its predominantly White cohort and the use of only neuropsychiatric testing to diagnose dementia. More research is needed to fully understand the impact of atrial fibrillation on cognitive outcomes in all people.”

“It’s well known that the heart and the brain are intimately connected. Individuals experiencing any cardiovascular issues should speak to their doctor,” Dr. Griffin added.

Shaheen Lakhan, MD, PhD, a neurologist and researcher in Boston, agreed. “If you ever get up too quickly and feel woozy, that is your brain not getting enough blood flow and you are getting all the warning signs to correct that – or else! Similarly, with atrial fibrillation, the heart is contracting, but not effectively pumping blood to the brain,” he said.

“This line of research shows that correcting the abnormal heart rhythm by zapping the faulty circuit with a catheter is actually better for your brain health than just taking medications alone,” added Dr. Lakhan, who was not involved with the study.

The study had no commercial funding. Dr. Srichawla, Dr. Griffin, and Dr. Lakhan report no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Article Source

FROM AAN 2023