More Evidence Ties Semaglutide to Reduced Alzheimer’s Risk


A new study provides real-world evidence to support the potential repurposing of glucagon-like peptide 1 receptor agonists (GLP-1 RAs), used to treat type 2 diabetes and obesity, for prevention of Alzheimer’s disease.

Adults with type 2 diabetes who were prescribed the GLP-1 RA semaglutide had a significantly lower risk for Alzheimer’s disease compared with their peers who were prescribed any of seven other antidiabetic medications, including other types of GLP-1 receptor–targeting medications. 

“These findings support further clinical trials to assess semaglutide’s potential in delaying or preventing Alzheimer’s disease,” wrote the investigators, led by Rong Xu, PhD, with Case Western Reserve School of Medicine, Cleveland, Ohio. 

The study was published online on October 24 in Alzheimer’s & Dementia.
 

Real-World Data

Semaglutide has shown neuroprotective effects in animal models of neurodegenerative diseases, including Alzheimer’s disease and Parkinson’s disease. In animal models of Alzheimer’s disease, the drug reduced beta-amyloid deposition and improved spatial learning and memory, as well as glucose metabolism in the brain. 

In a real-world analysis, Xu and colleagues used electronic health record data to identify 17,104 new users of semaglutide and 1,077,657 new users of seven other antidiabetic medications: other GLP-1 RAs, insulin, metformin, dipeptidyl peptidase 4 inhibitors, sodium-glucose cotransporter 2 inhibitors, sulfonylureas, and thiazolidinediones.

Over 3 years, treatment with semaglutide was associated with significantly reduced risk of developing Alzheimer’s disease, most strongly compared with insulin (hazard ratio [HR], 0.33) and most weakly compared with other GLP-1 RAs (HR, 0.59). 

Compared with the other medications, semaglutide was associated with a 40%-70% reduced risk for first-time diagnosis of Alzheimer’s disease in patients with type 2 diabetes, with similar reductions seen across obesity status and gender and age groups, the authors reported. 
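As a rough arithmetic check (an illustration, not the authors' analysis), a hazard ratio corresponds to an approximate relative risk reduction of (1 − HR) × 100%, which is how the reported hazard ratios bracket the 40%-70% range:

```python
def risk_reduction_pct(hazard_ratio):
    """Approximate relative risk reduction implied by a hazard ratio, in percent."""
    return round((1 - hazard_ratio) * 100)

# The two extremes reported in the study:
print(risk_reduction_pct(0.33))  # vs insulin: 67
print(risk_reduction_pct(0.59))  # vs other GLP-1 RAs: 41
```

The two extremes, 67% versus insulin and 41% versus other GLP-1 RAs, fall at the ends of the reported range.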

The findings align with recent evidence suggesting GLP-1 RAs may protect cognitive function. 

For example, as previously reported, in the phase 2b ELAD clinical trial, adults with early-stage Alzheimer’s disease taking the GLP-1 RA liraglutide exhibited slower decline in memory and thinking and experienced less brain atrophy over 12 months compared with placebo. 
 

Promising, but Preliminary 

Reached for comment, Courtney Kloske, PhD, Alzheimer’s Association director of scientific engagement, noted that diabetes is a known risk factor for AD and managing diabetes with drugs such as semaglutide “could benefit brain health simply by managing diabetes.”

“However, we still need large clinical trials in representative populations to determine if semaglutide specifically lowers the risk of Alzheimer’s, so it is too early to recommend it for prevention,” Kloske said. 

She noted that some research suggests that GLP-1 RAs “may help reduce inflammation and positively impact brain energy use. However, more research is needed to fully understand how these processes might contribute to preventing cognitive decline or Alzheimer’s,” Kloske cautioned. 

The Alzheimer’s Association’s “Part the Cloud” initiative has invested more than $68 million to advance 65 clinical trials targeting a variety of compounds, including repurposed drugs that may address known and potential new aspects of the disease, Kloske said. 

The study was supported by grants from the National Institute on Aging and the National Center for Advancing Translational Sciences. Xu and Kloske have no relevant conflicts.
 

A version of this article appeared on Medscape.com.


How Old Are You? Stand on One Leg and I’ll Tell You


This transcript has been edited for clarity.

So I was lying in bed the other night, trying to read my phone, and started complaining to my wife about how my vision keeps getting worse, and then how stiff I feel when I wake up in the morning, and how a recent injury is taking too long to heal, and she said, “Well, yeah. You’re 44. That’s when things start to head downhill.”

And I was like, “Forty-four? That seems very specific. I thought 50 was what people complain about.” And she said, “No, it’s a thing — 44 years old and 60 years old. There’s a drop-off there.”

And you know what? She was right.

A study, “Nonlinear dynamics of multi-omics profiles during human aging,” published in Nature Aging in August 2024, analyzed a ton of proteins and metabolites in people of various ages and found, when you put it all together, that there are some big changes in body chemistry over time — and those changes peak at age 44 and age 60. I should know better than to doubt my brilliant spouse.

But deep down, I believe the cliché that age is just a number. I don’t particularly care about being 44, or turning 50 or 60. I care about how my body and brain are aging. If I can be a happy, healthy, 80-year-old in full command of my faculties, I would consider that a major win no matter what the calendar says.

So I’m always interested in ways to quantify how my body is aging, independent of how many birthdays I have passed. And, according to a new study, there’s actually a really easy way to do this: Just stand on one leg.

The surprising results come from “Age-related changes in gait, balance, and strength parameters: A cross-sectional study,” appearing in PLOS One, which analyzed 40 individuals — half under age 65 and half over age 65 — across a variety of domains of strength, balance, and gait. The conceit of the study? We all know that things like strength and balance worsen over time, but what worsens fastest? What might be the best metric to tell us how our bodies are aging?

To that end, you have a variety of correlations between various metrics and calendar age.

As age increases, grip strength goes down. Men (inexplicably in pink) have higher grip strength overall, and women (confusingly in blue) lower. Somewhat less strong correlations were seen for knee strength.

What about balance?

To assess this, the researchers had the participants stand on a pressure plate. In one scenario, they did this with eyes open, and the next with eyes closed. They then measured how much the pressure varied around the center of the individual on the plate — basically, how much the person swayed while they were standing there.

Sway increased as age increased. Sway increased a bit more with eyes closed than with eyes open.
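The sway measurement can be sketched as the root-mean-square distance of the center-of-pressure samples from their mean position. Note that this is one common definition, and the study's exact metric may differ:

```python
import math

def sway_rms(cop_xy):
    """RMS distance of center-of-pressure samples from their mean position.

    cop_xy: list of (x, y) positions recorded by the pressure plate.
    Illustrative metric; the paper may define sway differently.
    """
    n = len(cop_xy)
    mean_x = sum(x for x, _ in cop_xy) / n
    mean_y = sum(y for _, y in cop_xy) / n
    return math.sqrt(
        sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in cop_xy) / n
    )

# A steadier stance clusters tightly around its mean position:
steady = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (-0.1, 0.0)]
wobbly = [(0.0, 0.0), (1.0, 0.5), (-0.8, 1.0), (0.5, -1.0)]
print(sway_rms(steady) < sway_rms(wobbly))  # True
```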

But the strongest correlation between any of these metrics and age was a simple one: How long can you stand on one leg?

Particularly for the nondominant leg, what you see here is a pretty dramatic drop-off in balance time around age 65, with younger people able to do 10 seconds with ease and some older people barely making it to 2 seconds. 

Of course, I had to try this for myself. And as I was standing around on one leg, it became clear to me exactly why this might be a good metric. It really integrates balance and strength in a way that the other tests don’t: balance, clearly, since you have to stay vertical over a relatively small base; but strength as well, because, well, one leg is holding up all the rest of you. You do feel it after a while.

So this metric passes the smell test to me, at least as a potential proxy for age-related physical decline.

But I should be careful to note that this was a cross-sectional study; the researchers looked at various people who were all different ages, not the same people over time to watch how these things change as they aged.

Also, the use of the correlation coefficient in graphs like this implies a certain linear relationship between age and standing-on-one-foot time. The raw data — the points on this graph — don’t appear that linear to me. As I mentioned above, it seems like there might be a bit of a sharp drop-off somewhere in the mid-60s. That means that we may not be able to use this as a sensitive test for aging that slowly changes as your body gets older. It might be that you’re able to essentially stand on one leg as long as you want until, one day, you can’t. That gives us less warning and less to act on.

And finally, we don’t know that changing this metric will change your health for the better. I’m sure a good physiatrist or physical therapist could design some exercises to increase any of our standing-on-one-leg times. And no doubt, with practice, you could get your numbers way up. But that doesn’t necessarily mean you’re healthier. It’s like “teaching to the test”; you might score better on the standardized exam, but you didn’t really learn the material. 

So I am not adding one-leg standing to my daily exercise routine. But I won’t lie and tell you that, from time to time, and certainly on my 60th birthday, you may find me standing like a flamingo with a stopwatch in my hand.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

 

A version of this article appeared on Medscape.com.


Blood Tests for Alzheimer’s Are Here... Are Clinicians Ready?


With the approval of anti-amyloid monoclonal antibodies to treat early-stage Alzheimer’s disease, accurate and early diagnosis has become crucial.

Blood-based biomarkers offer a promising alternative to amyloid PET scans and cerebrospinal fluid (CSF) analysis and are being increasingly used in clinical practice to support an Alzheimer’s disease diagnosis.

Recently, an expert workgroup convened by the Global CEO Initiative on Alzheimer’s Disease published recommendations for the clinical implementation of Alzheimer’s disease blood-based biomarkers.

“Our hope was to provide some recommendations that clinicians could use to develop the best pathways for their clinical practice,” said workgroup co-chair Michelle M. Mielke, PhD, with Wake Forest University School of Medicine, Winston-Salem, North Carolina.
 

Triage and Confirmatory Pathways

The group recommends two implementation pathways for Alzheimer’s disease blood biomarkers — one for current use for triaging and another for future use to confirm amyloid pathology once blood biomarker tests have reached sufficient performance for this purpose.

In the triage pathway, a negative blood biomarker test would flag individuals unlikely to have detectable brain amyloid pathology. This outcome would prompt clinicians to focus on evaluating non–Alzheimer’s disease-related causes of cognitive impairment, which may streamline the diagnosis of other causes of cognitive impairment, the authors said.

A positive triage blood test would suggest a higher likelihood of amyloid pathology and prompt referral to secondary care for further assessment and consideration for a second, more accurate test, such as amyloid PET or CSF for amyloid confirmation.

In the confirmatory pathway, a positive blood biomarker test result would identify amyloid pathology without the need for a second test, providing a faster route to diagnosis, the authors noted.
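The two pathways reduce to a simple decision sketch (the pathway names and returned strings here are illustrative, not quoted from the recommendations):

```python
def next_step(pathway, blood_test_positive):
    """Decision sketch of the workgroup's two blood-biomarker pathways.

    Labels and strings are illustrative, not quoted from the paper.
    """
    if pathway == "triage":
        if blood_test_positive:
            return "refer for confirmatory amyloid PET or CSF testing"
        return "evaluate non-Alzheimer's causes of cognitive impairment"
    if pathway == "confirmatory":
        if blood_test_positive:
            return "amyloid pathology identified; no second test needed"
        return "amyloid pathology unlikely; evaluate other causes"
    raise ValueError(f"unknown pathway: {pathway!r}")
```

The key difference is what a positive result triggers: in triage it buys a second, more accurate test; in the confirmatory pathway it stands on its own.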

Mielke emphasized that these recommendations represent a “first step” and will need to be updated as experiences with the Alzheimer’s disease blood biomarkers in clinical care increase and additional barriers and facilitators are identified.

“These updates will likely include community-informed approaches that incorporate feedback from patients as well as healthcare providers, alongside results from validation in diverse real-world settings,” said workgroup co-chair Chi Udeh-Momoh, PhD, MSc, with Wake Forest University School of Medicine and the Brain and Mind Institute, Aga Khan University, Nairobi, Kenya.

The Alzheimer’s Association published “appropriate use” recommendations for blood biomarkers in 2022.

“Currently, the Alzheimer’s Association is building an updated library of clinical guidance that distills the scientific evidence using de novo systematic reviews and translates them into clear and actionable recommendations for clinical practice,” said Rebecca M. Edelmayer, PhD, vice president of scientific engagement, Alzheimer’s Association.

“The first major effort with our new process will be the upcoming Evidence-based Clinical Practice Guideline on the Use of Blood-based Biomarkers (BBMs) in Specialty Care Settings. This guideline’s recommendations will be published in early 2025,” Edelmayer said.
 

Availability and Accuracy

Research has shown that amyloid beta and tau protein blood biomarkers — especially a high plasma phosphorylated (p)–tau217 levels — are highly accurate in identifying Alzheimer’s disease in patients with cognitive symptoms attending primary and secondary care clinics.

Several tests targeting plasma p-tau217 are now available for use. They include the PrecivityAD2 blood test from C2N Diagnostics and the Simoa p-Tau 217 Planar Kit and LucentAD p-Tau 217 — both from Quanterix.

In a recent head-to-head comparison of seven leading blood tests for AD pathology, measures of plasma p-tau217, either individually or in combination with other plasma biomarkers, had the strongest relationships with Alzheimer’s disease outcomes.

A recent Swedish study showed that the PrecivityAD2 test had an accuracy of 91% for correctly classifying clinical, biomarker-verified Alzheimer’s disease.

“We’ve been using these blood biomarkers in research for a long time and we’re now taking the jump to start using them in clinic to risk stratify patients,” said Fanny Elahi, MD, PhD, director of fluid biomarker research for the Barbara and Maurice Deane Center for Wellness and Cognitive Health at Icahn Mount Sinai in New York City.

New York’s Mount Sinai Health System is among the first in the northeast to offer blood tests across primary and specialty care settings for early diagnosis of AD and related dementias.

Edelmayer cautioned, “There is no single, stand-alone test to diagnose Alzheimer’s disease today. Blood testing is one piece of the diagnostic process.”

“Currently, physicians use well-established diagnostic tools combined with medical history and other information, including neurological exams, cognitive and functional assessments as well as brain imaging and spinal fluid analysis and blood to make an accurate diagnosis and to understand which patients are eligible for approved treatments,” she said.

There are also emerging biomarkers in the research pipeline, Edelmayer said.

“For example, some researchers think retinal imaging has the potential to detect biological signs of Alzheimer’s disease within certain areas of the eye,” she explained.

“Other emerging biomarkers include examining components in saliva and the skin for signals that may indicate early biological changes in the brain. These biomarkers are still very exploratory, and more research is needed before these tests or biomarkers can be used more routinely to study risk or aid in diagnosis,” Edelmayer said.
 

 

 

Ideal Candidates for Alzheimer’s Disease Blood Testing?

Experts agree that blood tests represent a convenient and scalable option to address the anticipated surge in demand for biomarker testing with the availability of disease-modifying treatments. For now, however, they are not for all older adults worried about their memory.

“Current practice should focus on using these blood biomarkers in individuals with cognitive impairment rather than in those with normal cognition or subjective cognitive decline until further research demonstrates effective interventions for individuals considered cognitively normal with elevated levels of amyloid,” the authors of a recent JAMA editorial noted.

At Mount Sinai, “we’re not starting with stone-cold asymptomatic individuals. But ultimately, this is what the blood tests are intended for — screening,” Elahi noted.

She also noted that Mount Sinai has a “very diverse population” — some with young onset cognitive symptoms, so the entry criteria for testing are “very wide.”

“Anyone above age 40 with symptoms can qualify to get a blood test. We do ask at this stage that either the individual report symptoms or someone in their life or their clinician be worried about their cognition or their brain function,” Elahi said.
 

Ethical Considerations, Counseling

Elahi emphasized the importance of counseling patients who come to the clinic seeking an Alzheimer’s disease blood test. This should include how the diagnostic process will unfold and what the next steps are with a given result.

Elahi said patients need to be informed that Alzheimer’s disease blood biomarkers are still “relatively new,” and a test can help a patient “know the likelihood of having the disease, but it won’t be 100% definitive.”

To ensure the ethical principle of “do no harm,” counseling should ensure that patients are fully prepared for the implications of the test results and ensure that the decision to test aligns with the patient’s readiness and well-being, Elahi said.

Edelmayer said the forthcoming clinical practice guidelines will provide “evidence-based recommendations for physicians to help guide them through the decision-making process around who should be tested and when. In the meantime, the Alzheimer’s Association urges providers to refer to the 2022 appropriate use recommendations for blood tests in clinical practice and trial settings.”

Mielke has served on scientific advisory boards and/or having consulted for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio. Edelmayer and Elahi had no relevant disclosures.
 

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

With the approval of anti-amyloid monoclonal antibodies to treat early-stage Alzheimer’s disease, accurate and early diagnosis has become crucial.

Blood-based biomarkers offer a promising alternative to amyloid PET scans and cerebrospinal fluid (CSF) analysis and are being increasingly used in clinical practice to support an Alzheimer’s disease diagnosis.

Recently, an expert workgroup convened by the Global CEO Initiative on Alzheimer’s Disease published recommendations for the clinical implementation of Alzheimer’s disease blood-based biomarkers.

“Our hope was to provide some recommendations that clinicians could use to develop the best pathways for their clinical practice,” said workgroup co-chair Michelle M. Mielke, PhD, with Wake Forest University School of Medicine, Winston-Salem, North Carolina.
 

Triage and Confirmatory Pathways

The group recommends two implementation pathways for Alzheimer’s disease blood biomarkers — one for current use for triaging and another for future use to confirm amyloid pathology once blood biomarker tests have reached sufficient performance for this purpose.

In the triage pathway, a negative blood biomarker test would flag individuals unlikely to have detectable brain amyloid pathology, prompting clinicians to focus on evaluating non–Alzheimer’s disease-related causes of cognitive impairment and potentially streamlining their diagnosis, the authors said.

A positive triage blood test would suggest a higher likelihood of amyloid pathology and prompt referral to secondary care for further assessment and consideration for a second, more accurate test, such as amyloid PET or CSF for amyloid confirmation.

In the confirmatory pathway, a positive blood biomarker test result would identify amyloid pathology without the need for a second test, providing a faster route to diagnosis, the authors noted.
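The two pathways amount to a simple decision rule. A minimal sketch in Python may make the contrast concrete; the function names and return strings are illustrative only and are not drawn from the workgroup’s recommendations:

```python
def triage_pathway(blood_test_positive: bool) -> str:
    """Current use: the blood test screens, but a positive result
    still requires a second, more accurate confirmatory test."""
    if not blood_test_positive:
        # Negative test: amyloid pathology unlikely; work up other causes.
        return "evaluate non-Alzheimer's causes of cognitive impairment"
    # Positive test: refer onward rather than diagnose directly.
    return "refer to secondary care for amyloid PET or CSF confirmation"


def confirmatory_pathway(blood_test_positive: bool) -> str:
    """Future use, once test performance is sufficient: a positive
    blood test identifies amyloid pathology without a second test."""
    if blood_test_positive:
        return "amyloid pathology identified; proceed to diagnosis"
    return "evaluate non-Alzheimer's causes of cognitive impairment"
```

The key difference is what a positive result triggers: referral for confirmation in the triage pathway versus a direct route to diagnosis in the confirmatory pathway; a negative result plays the same rule-out role in both.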

Mielke emphasized that these recommendations represent a “first step” and will need to be updated as experiences with the Alzheimer’s disease blood biomarkers in clinical care increase and additional barriers and facilitators are identified.

“These updates will likely include community-informed approaches that incorporate feedback from patients as well as healthcare providers, alongside results from validation in diverse real-world settings,” said workgroup co-chair Chi Udeh-Momoh, PhD, MSc, with Wake Forest University School of Medicine and the Brain and Mind Institute, Aga Khan University, Nairobi, Kenya.

The Alzheimer’s Association published “appropriate use” recommendations for blood biomarkers in 2022.

“Currently, the Alzheimer’s Association is building an updated library of clinical guidance that distills the scientific evidence using de novo systematic reviews and translates them into clear and actionable recommendations for clinical practice,” said Rebecca M. Edelmayer, PhD, vice president of scientific engagement, Alzheimer’s Association.

“The first major effort with our new process will be the upcoming Evidence-based Clinical Practice Guideline on the Use of Blood-based Biomarkers (BBMs) in Specialty Care Settings. This guideline’s recommendations will be published in early 2025,” Edelmayer said.
 

Availability and Accuracy

Research has shown that amyloid beta and tau protein blood biomarkers — especially elevated plasma phosphorylated (p)-tau217 levels — are highly accurate in identifying Alzheimer’s disease in patients with cognitive symptoms attending primary and secondary care clinics.

Several tests targeting plasma p-tau217 are now available for use. They include the PrecivityAD2 blood test from C2N Diagnostics and the Simoa p-Tau 217 Planar Kit and LucentAD p-Tau 217 — both from Quanterix.

In a recent head-to-head comparison of seven leading blood tests for AD pathology, measures of plasma p-tau217, either individually or in combination with other plasma biomarkers, had the strongest relationships with Alzheimer’s disease outcomes.

A recent Swedish study showed that the PrecivityAD2 test had an accuracy of 91% for correctly classifying clinical, biomarker-verified Alzheimer’s disease.

“We’ve been using these blood biomarkers in research for a long time and we’re now taking the jump to start using them in clinic to risk stratify patients,” said Fanny Elahi, MD, PhD, director of fluid biomarker research for the Barbara and Maurice Deane Center for Wellness and Cognitive Health at Icahn Mount Sinai in New York City.

New York’s Mount Sinai Health System is among the first in the northeast to offer blood tests across primary and specialty care settings for early diagnosis of AD and related dementias.

Edelmayer cautioned, “There is no single, stand-alone test to diagnose Alzheimer’s disease today. Blood testing is one piece of the diagnostic process.”

“Currently, physicians use well-established diagnostic tools combined with medical history and other information, including neurological exams, cognitive and functional assessments as well as brain imaging and spinal fluid analysis and blood to make an accurate diagnosis and to understand which patients are eligible for approved treatments,” she said.

There are also emerging biomarkers in the research pipeline, Edelmayer said.

“For example, some researchers think retinal imaging has the potential to detect biological signs of Alzheimer’s disease within certain areas of the eye,” she explained.

“Other emerging biomarkers include examining components in saliva and the skin for signals that may indicate early biological changes in the brain. These biomarkers are still very exploratory, and more research is needed before these tests or biomarkers can be used more routinely to study risk or aid in diagnosis,” Edelmayer said.
 

Ideal Candidates for Alzheimer’s Disease Blood Testing?

Experts agree that blood tests represent a convenient and scalable option to address the anticipated surge in demand for biomarker testing with the availability of disease-modifying treatments. For now, however, they are not for all older adults worried about their memory.

“Current practice should focus on using these blood biomarkers in individuals with cognitive impairment rather than in those with normal cognition or subjective cognitive decline until further research demonstrates effective interventions for individuals considered cognitively normal with elevated levels of amyloid,” the authors of a recent JAMA editorial noted.

At Mount Sinai, “we’re not starting with stone-cold asymptomatic individuals. But ultimately, this is what the blood tests are intended for — screening,” Elahi noted.

She also noted that Mount Sinai serves a “very diverse population,” including some patients with young-onset cognitive symptoms, so the entry criteria for testing are “very wide.”

“Anyone above age 40 with symptoms can qualify to get a blood test. We do ask at this stage that either the individual report symptoms or someone in their life or their clinician be worried about their cognition or their brain function,” Elahi said.
 

Ethical Considerations, Counseling

Elahi emphasized the importance of counseling patients who come to the clinic seeking an Alzheimer’s disease blood test. This should include how the diagnostic process will unfold and what the next steps are with a given result.

Elahi said patients need to be informed that Alzheimer’s disease blood biomarkers are still “relatively new,” and a test can help a patient “know the likelihood of having the disease, but it won’t be 100% definitive.”

To ensure the ethical principle of “do no harm,” counseling should ensure that patients are fully prepared for the implications of the test results and ensure that the decision to test aligns with the patient’s readiness and well-being, Elahi said.

Edelmayer said the forthcoming clinical practice guidelines will provide “evidence-based recommendations for physicians to help guide them through the decision-making process around who should be tested and when. In the meantime, the Alzheimer’s Association urges providers to refer to the 2022 appropriate use recommendations for blood tests in clinical practice and trial settings.”

Mielke has served on scientific advisory boards and/or consulted for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio. Edelmayer and Elahi had no relevant disclosures.
 

A version of this article appeared on Medscape.com.


How Effective Is the High-Dose Flu Vaccine in Older Adults?

Article Type
Changed
Wed, 10/23/2024 - 10:22

How can the immunogenicity and effectiveness of flu vaccines be improved in older adults? Several strategies are available, one being the addition of an adjuvant. For example, the MF59-adjuvanted vaccine has shown superior immunogenicity. However, “we do not have data from controlled and randomized clinical trials showing superior clinical effectiveness versus the standard dose,” Professor Odile Launay, an infectious disease specialist at Cochin Hospital in Paris, France, noted during a press conference. Another option is to increase the antigen dose in the vaccine, creating a high-dose (HD) flu vaccine.

Why is there a need for an HD vaccine? “The elderly population bears the greatest burden from the flu,” explained Launay. “This is due to three factors: An aging immune system, a higher number of comorbidities, and increased frailty.” Standard-dose flu vaccines are seen as offering suboptimal protection for those older than 65 years, which led to the development of a quadrivalent vaccine with four times the antigen dose of standard flu vaccines. This HD vaccine was introduced in France during the 2021/2022 flu season. A real-world cohort study has since been conducted to evaluate its effectiveness in the target population — those aged 65 years or older. The results were recently published in Clinical Microbiology and Infection.

Cohort Study

The study included 405,385 noninstitutionalized people aged 65 years or older who received the HD vaccine, matched 1:4 with 1,621,540 individuals who received the standard-dose vaccine. Both groups had an average age of 77 years; 56% were women, and 51% were vaccinated in pharmacies. The majority (91%) had previously been vaccinated against flu, 97% had completed a full COVID-19 vaccination schedule, and more than half had at least one chronic illness.

Hospitalization rates for flu — the study’s primary outcome — were 69.5 vs 90.5 per 100,000 person-years in the HD vs standard-dose group. This represented a 23.3% reduction (95% CI, 8.4-35.8; P = .003).
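The figures reported above can be checked directly: the standard-dose group is exactly four times the size of the HD group, and the crude relative reduction implied by the two hospitalization rates is about 23%, in line with the published 23.3% estimate (the small gap presumably reflects adjustment in the full statistical model). A quick arithmetic check:

```python
# Group sizes as reported in the study
hd_group = 405_385          # HD-vaccine recipients
std_group = 1_621_540       # standard-dose comparators

# Confirm the 1:4 matching ratio
assert std_group == 4 * hd_group

# Flu hospitalization rates, per 100,000 person-years
rate_hd, rate_std = 69.5, 90.5

# Crude relative risk reduction implied by the two rates
relative_reduction = 1 - rate_hd / rate_std
print(f"crude relative reduction: {relative_reduction:.1%}")  # prints ~23.2%
```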
 

Strengths and Limitations

Among the strengths of the study, Launay highlighted the large number of vaccinated participants older than 65 years — more than 7 million — and the widespread use of polymerase chain reaction flu tests in cases of hospitalization for respiratory infections, which improved flu coding in the database used. Additionally, the results were consistent with those of previous studies.

However, limitations included the retrospective design, which did not randomize participants and introduced potential bias. For example, the HD vaccine may have been prioritized for the oldest people or those with multiple comorbidities. Additionally, the 2021/2022 flu season was atypical, with the simultaneous circulation of the flu virus and SARS-CoV-2, as noted by Launay.
 

Conclusion

In conclusion, this first evaluation of the HD flu vaccine’s effectiveness in France showed a reduction in flu-related hospitalizations of about 23%, consistent with existing data covering 12 flu seasons. The vaccine has been available for longer in the United States and Northern Europe.

“The latest unpublished data from the 2022/23 season show a 27% reduction in hospitalizations with the HD vaccine in people over 65,” added Launay.

Note: Due to a pricing disagreement with the French government, Sanofi’s HD flu vaccine Efluelda, intended for people older than 65 years, will not be available this year. (See: Withdrawal of the Efluelda Influenza Vaccine: The Academy of Medicine Reacts). However, the company has submitted a dossier for a trivalent form for a return in the 2025/2026 season and is working on developing mRNA vaccines. Additionally, a combined flu/COVID-19 vaccine is currently in development.

The study was funded by Sanofi. Several authors are Sanofi employees. Odile Launay reported conflicts of interest with Sanofi, MSD, Pfizer, GSK, and Moderna.
 

This story was translated from Medscape’s French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Publications
Topics
Sections

How can the immunogenicity and effectiveness of flu vaccines be improved in older adults? Several strategies are available, one being the addition of an adjuvant. For example, the MF59-adjuvanted vaccine has shown superior immunogenicity. However, “we do not have data from controlled and randomized clinical trials showing superior clinical effectiveness versus the standard dose,” Professor Odile Launay, an infectious disease specialist at Cochin Hospital in Paris, France, noted during a press conference. Another option is to increase the antigen dose in the vaccine, creating a high-dose (HD) flu vaccine.

Why is there a need for an HD vaccine? “The elderly population bears the greatest burden from the flu,” explained Launay. “This is due to three factors: An aging immune system, a higher number of comorbidities, and increased frailty.” Standard-dose flu vaccines are seen as offering suboptimal protection for those older than 65 years, which led to the development of a quadrivalent vaccine with four times the antigen dose of standard flu vaccines. This HD vaccine was introduced in France during the 2021/2022 flu season. A real-world cohort study has since been conducted to evaluate its effectiveness in the target population — those aged 65 years or older. The results were recently published in Clinical Microbiology and Infection.

Cohort Study

The study included 405,385 noninstitutionalized people aged 65 years or older matched with 1,621,540 individuals in a 1:4 ratio. The first group received the HD vaccine, while the second group received the standard-dose vaccine. Both the groups had an average age of 77 years, with 56% women, and 51% vaccinated in pharmacies. The majority had been previously vaccinated against flu (91%), and 97% had completed a full COVID-19 vaccination schedule. More than half had at least one chronic illness.

Hospitalization rates for flu — the study’s primary outcome — were 69.5 vs 90.5 per 100,000 person-years in the HD vs standard-dose group. This represented a 23.3% reduction (95% CI, 8.4-35.8; P = .003).
 

Strengths and Limitations

Among the strengths of the study, Launay highlighted the large number of vaccinated participants older than 65 years — more than 7 million — and the widespread use of polymerase chain reaction flu tests in cases of hospitalization for respiratory infections, which improved flu coding in the database used. Additionally, the results were consistent with those of previous studies.

However, limitations included the retrospective design, which did not randomize participants and introduced potential bias. For example, the HD vaccine may have been prioritized for the oldest people or those with multiple comorbidities. Additionally, the 2021/2022 flu season was atypical, with the simultaneous circulation of the flu virus and SARS-CoV-2, as noted by Launay.
 

Conclusion

In conclusion, this first evaluation of the HD flu vaccine’s effectiveness in France showed a 25% reduction in hospitalizations, consistent with existing data covering 12 flu seasons. The vaccine has been available for a longer period in the United States and Northern Europe.

“The latest unpublished data from the 2022/23 season show a 27% reduction in hospitalizations with the HD vaccine in people over 65,” added Launay.

Note: Due to a pricing disagreement with the French government, Sanofi’s HD flu vaccine Efluelda, intended for people older than 65 years, will not be available this year. (See: Withdrawal of the Efluelda Influenza Vaccine: The Academy of Medicine Reacts). However, the company has submitted a dossier for a trivalent form for a return in the 2025/2026 season and is working on developing mRNA vaccines. Additionally, a combined flu/COVID-19 vaccine is currently in development.

The study was funded by Sanofi. Several authors are Sanofi employees. Odile Launay reported conflicts of interest with Sanofi, MSD, Pfizer, GSK, and Moderna.
 

This story was translated from Medscape’s French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

How can the immunogenicity and effectiveness of flu vaccines be improved in older adults? Several strategies are available, one being the addition of an adjuvant. For example, the MF59-adjuvanted vaccine has shown superior immunogenicity. However, “we do not have data from controlled and randomized clinical trials showing superior clinical effectiveness versus the standard dose,” Professor Odile Launay, an infectious disease specialist at Cochin Hospital in Paris, France, noted during a press conference. Another option is to increase the antigen dose in the vaccine, creating a high-dose (HD) flu vaccine.

Why is there a need for an HD vaccine? “The elderly population bears the greatest burden from the flu,” explained Launay. “This is due to three factors: An aging immune system, a higher number of comorbidities, and increased frailty.” Standard-dose flu vaccines are seen as offering suboptimal protection for those older than 65 years, which led to the development of a quadrivalent vaccine with four times the antigen dose of standard flu vaccines. This HD vaccine was introduced in France during the 2021/2022 flu season. A real-world cohort study has since been conducted to evaluate its effectiveness in the target population — those aged 65 years or older. The results were recently published in Clinical Microbiology and Infection.

Cohort Study

The study included 405,385 noninstitutionalized people aged 65 years or older who received the HD vaccine, matched 1:4 with 1,621,540 individuals who received the standard-dose vaccine. Both groups had an average age of 77 years; 56% were women, and 51% were vaccinated in pharmacies. The majority had been previously vaccinated against flu (91%), and 97% had completed a full COVID-19 vaccination schedule. More than half had at least one chronic illness.

Hospitalization rates for flu — the study’s primary outcome — were 69.5 vs 90.5 per 100,000 person-years in the HD vs standard-dose group. This represented a 23.3% reduction (95% CI, 8.4-35.8; P = .003).
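The reported 23.3% figure can be sanity-checked as a crude relative reduction between the two rates (the published estimate is adjusted, so the crude ratio only approximates it):

```python
# Crude relative reduction in flu hospitalization rates between vaccine groups.
# Rates are those reported in the cohort study; this back-of-the-envelope check
# does not reproduce the study's adjusted analysis.

hd_rate = 69.5  # hospitalizations per 100,000 person-years, high-dose group
sd_rate = 90.5  # hospitalizations per 100,000 person-years, standard-dose group

relative_reduction = 1 - hd_rate / sd_rate
print(f"{relative_reduction:.1%}")  # roughly 23%, close to the adjusted 23.3%
```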
 

Strengths and Limitations

Among the strengths of the study, Launay highlighted the large number of vaccinated participants older than 65 years — more than 7 million — and the widespread use of polymerase chain reaction flu tests in cases of hospitalization for respiratory infections, which improved flu coding in the database used. Additionally, the results were consistent with those of previous studies.

However, limitations included the retrospective design, which did not randomize participants and introduced potential bias. For example, the HD vaccine may have been prioritized for the oldest people or those with multiple comorbidities. Additionally, the 2021/2022 flu season was atypical, with the simultaneous circulation of the flu virus and SARS-CoV-2, as noted by Launay.
 

Conclusion

In conclusion, this first evaluation of the HD flu vaccine’s effectiveness in France showed a reduction of about 25% in hospitalizations (23.3% in the primary analysis), consistent with existing data covering 12 flu seasons. The vaccine has been available for a longer period in the United States and Northern Europe.

“The latest unpublished data from the 2022/23 season show a 27% reduction in hospitalizations with the HD vaccine in people over 65,” added Launay.

Note: Due to a pricing disagreement with the French government, Sanofi’s HD flu vaccine Efluelda, intended for people older than 65 years, will not be available this year. (See: Withdrawal of the Efluelda Influenza Vaccine: The Academy of Medicine Reacts). However, the company has submitted a dossier for a trivalent form for a return in the 2025/2026 season and is working on developing mRNA vaccines. Additionally, a combined flu/COVID-19 vaccine is currently in development.

The study was funded by Sanofi. Several authors are Sanofi employees. Odile Launay reported conflicts of interest with Sanofi, MSD, Pfizer, GSK, and Moderna.
 

This story was translated from Medscape’s French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Risk Assessment Tool Can Help Predict Fractures in Cancer

Article Type
Changed
Wed, 10/23/2024 - 08:22

 

TOPLINE:

The Fracture Risk Assessment Tool (FRAX), with bone mineral density, predicts the risk for major osteoporotic fractures and hip fractures in patients with cancer, but FRAX without bone mineral density slightly overestimates these risks, a new analysis found.

METHODOLOGY:

  • Cancer-specific guidelines recommend using FRAX to assess fracture risk, but its applicability in patients with cancer remains unclear.
  • This retrospective cohort study included 9877 patients with cancer (mean age, 67.1 years) and 45,875 matched control individuals without cancer (mean age, 66.2 years). All participants had dual-energy x-ray absorptiometry (DXA) scans.
  • Researchers collected data on bone mineral density and fractures. The 10-year probabilities of major osteoporotic fractures and hip fractures were calculated using FRAX, and the observed 10-year probabilities of these fractures were compared with FRAX-derived probabilities.
  • Compared with individuals without cancer, patients with cancer had a shorter mean follow-up duration (7.6 vs 8.5 years), a slightly higher mean body mass index, and a higher percentage of parental hip fractures (8.2% vs 7.0%); additionally, patients with cancer were more likely to have secondary causes of osteoporosis (38.4% vs 10%) and less likely to receive osteoporosis medication (4.2% vs 9.9%).

TAKEAWAY:

  • Compared with individuals without cancer, patients with cancer had a significantly higher incidence rate of major fractures (14.5 vs 12.9 per 1000 person-years) and hip fractures (4.2 vs 3.5 per 1000 person-years).
  • FRAX with bone mineral density exhibited excellent calibration for predicting major osteoporotic fractures (slope, 1.03) and hip fractures (0.97) in patients with cancer, regardless of the site of cancer diagnosis. FRAX without bone mineral density, however, overestimated the risk for both major (0.87) and hip fractures (0.72).
  • In patients with cancer, FRAX with bone mineral density findings were associated with incident major osteoporotic fractures (hazard ratio [HR] per SD, 1.84) and hip fractures (HR per SD, 3.61).
  • When models were adjusted for FRAX with bone mineral density, patients with cancer had an increased risk for both major osteoporotic fractures (HR, 1.17) and hip fractures (HR, 1.30). No difference was found in the risk for fracture between patients with and individuals without cancer when the models were adjusted for FRAX without bone mineral density, even when considering osteoporosis medication use.
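A compact way to read those calibration slopes, assuming the slope is the ratio of observed to FRAX-predicted 10-year probability (a common calibration convention; the study's own estimation procedure may differ):

```python
# Illustrative reading of a calibration slope as observed/predicted risk.
# The predicted probability below is hypothetical; only the 0.72 slope
# (hip fractures, FRAX without BMD) comes from the study summary.

def observed_from_predicted(predicted: float, slope: float) -> float:
    """Observed risk implied by a predicted risk under a given calibration ratio."""
    return slope * predicted

# A predicted 10-year hip-fracture probability of 10% would correspond to an
# observed probability of about 7.2% at a slope of 0.72 — that is, the tool
# predicts more risk than is observed (overestimation).
print(observed_from_predicted(0.10, 0.72))
```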

IN PRACTICE:

“This retrospective cohort study demonstrates that individuals with cancer are at higher risk of fracture than individuals without cancer and that FRAX, particularly with BMD [bone mineral density], may accurately predict fracture risk in this population. These results, along with the known mortality risk of osteoporotic fractures among cancer survivors, further emphasize the clinical importance of closing the current osteoporosis care gap among cancer survivors,” the authors wrote.

SOURCE:

This study, led by Carrie Ye, MD, MPH, University of Alberta, Edmonton, Alberta, Canada, was published online in JAMA Oncology.

LIMITATIONS:

This study cohort included a selected group of cancer survivors who were referred for DXA scans and may not represent the general cancer population. The cohort consisted predominantly of women, limiting the generalizability to men with cancer. Given the heterogeneity of the population, the findings may not be applicable to all cancer subgroups. Information on cancer stage or the presence of bone metastases at the time of fracture risk assessment was lacking, which could have affected the findings.

DISCLOSURES:

This study was funded by the CancerCare Manitoba Foundation. Three authors reported having ties with various sources, including two who received grants from various organizations.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


These Patients May Be Less Adherent to nAMD Treatment

Article Type
Changed
Wed, 10/23/2024 - 08:29

 

TOPLINE:

Patients who receive a diagnosis of neovascular age-related macular degeneration (nAMD) from their primary care clinician may be less likely to adhere to treatment than those who receive the diagnosis from a specialist who provides anti–vascular endothelial growth factor (anti-VEGF) therapy, according to global survey results presented at the European Society of Retina Specialists (EURETINA) 2024. Likewise, patients who self-pay for the medication or who have bilateral nAMD may be less adherent to therapy, researchers found.

METHODOLOGY:

  • Researchers analyzed data from 4558 patients with nAMD who participated in the Barometer Global Survey, which involved 77 clinics in 24 countries, including Canada, Mexico, Brazil, Germany, and France.
  • The survey included multiple-choice questions on personal characteristics, disease awareness, experiences with treatment, and logistical challenges with getting to appointments.
  • An exploratory statistical analysis identified 19 variables that influenced patient adherence to anti-VEGF therapy.
  • The researchers classified 670 patients who missed two or more appointments during a 12-month period as nonadherent.

TAKEAWAY:

  • Patients with nAMD diagnosed by their family doctor or general practitioner had a threefold higher risk for nonadherence than those diagnosed by the physician treating their nAMD.
  • Self-pay was associated with more than twice the odds of nonadherence compared with having insurance coverage (odds ratio [OR], 2.5).
  • Compared with unilateral nAMD, bilateral nAMD was associated with higher odds of multiple missed appointments (OR, 1.7).
  • Nonadherence increased with the number of anti-VEGF injections, which may show that “longer treatment durations could permit more opportunities for absenteeism,” the investigators noted.

IN PRACTICE:

“Identifying patient characteristics and challenges that may be associated with nonadherence allows clinicians to recognize patients at risk for nonadherence and provide further support before these patients begin to miss appointments,” the study authors wrote.

SOURCE:

This study was led by Laurent Kodjikian, MD, PhD, with Croix-Rousse University Hospital and the University of Lyon in France. The findings were presented in a poster at EURETINA 2024 (September 19-22).

LIMITATIONS:

The survey relied on participant responses using Likert scales and single-choice questions. Patients from the United States were not included in the study. 

DISCLOSURES:

The survey and medical writing support for the study were funded by Bayer Consumer Care. Kodjikian and co-authors disclosed consulting work for Bayer and other pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Is BMI Underestimating Breast Cancer Risk in Postmenopausal Women?

Article Type
Changed
Wed, 10/16/2024 - 12:40

 

TOPLINE:

Excess body fat in postmenopausal women is linked to a higher risk for breast cancer, with the Clínica Universidad de Navarra-Body Adiposity Estimator (CUN-BAE) showing a stronger association than body mass index (BMI). Accurate body fat measures are crucial for effective cancer prevention.

METHODOLOGY:

  • Researchers conducted a case-control study including 1033 breast cancer cases and 1143 postmenopausal population controls from the MCC-Spain study.
  • Participants were aged 20-85 years. BMI was calculated as weight in kilograms divided by the square of height in meters and categorized using World Health Organization standards: < 25, 25-29.9, 30-34.9, and ≥ 35.
  • CUN-BAE was calculated using a specific equation and categorized according to the estimated percentage of body fat: < 35%, 35%-39.9%, 40%-44.9%, and ≥ 45%.
  • Odds ratios (ORs) were estimated with 95% CIs for both measures (BMI and CUN-BAE) for breast cancer cases using unconditional logistic regression.
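The "specific equation" is not reproduced in the summary; the published CUN-BAE formula (Gómez-Ambrosi et al., 2012) estimates percent body fat from BMI, age, and sex. A sketch is below — the coefficients are quoted from the original publication and should be verified against it before any reuse:

```python
# CUN-BAE estimate of percent body fat (Gómez-Ambrosi et al., 2012).
# sex = 0 for men, 1 for women; BMI in kg/m^2, age in years.

def cun_bae(bmi: float, age: float, sex: int) -> float:
    """Estimated body fat percentage from BMI, age, and sex."""
    return (
        -44.988
        + 0.503 * age
        + 10.689 * sex
        + 3.172 * bmi
        - 0.026 * bmi**2
        + 0.181 * bmi * sex
        - 0.02 * bmi * age
        - 0.005 * bmi**2 * sex
        + 0.00021 * bmi**2 * age
    )

# Example (hypothetical person): a 70-year-old woman with BMI 30 lands in the
# study's 40%-44.9% body fat category.
print(round(cun_bae(30, 70, 1), 1))  # about 44.8
```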

TAKEAWAY:

  • The fraction of breast cancer risk attributable to excess body weight was 23% when assessed using a BMI value > 30 and 38% when assessed using a CUN-BAE value > 40% body fat.
  • Hormone receptor stratification showed that these differences in population-attributable fractions were only observed in hormone receptor–positive cases, with an estimated burden of 19.9% for BMI and 41.9% for CUN-BAE.
  • The highest categories of CUN-BAE showed an increase in the risk for postmenopausal breast cancer (OR, 2.13 for body fat ≥ 45% compared with the reference category < 35%).
  • No similar trend was observed for BMI, as the gradient declined after a BMI ≥ 35.
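The burden estimates above are population-attributable fractions. Levin's formula shows how a more sensitive adiposity measure, by classifying more women as exposed, can yield a larger attributable fraction even at a similar relative risk. The prevalence and risk values below are hypothetical, chosen only to illustrate the mechanism:

```python
# Levin's population-attributable fraction (PAF): the share of cases that would
# not occur absent the exposure. Inputs here are illustrative, not study data.

def levin_paf(prevalence: float, relative_risk: float) -> float:
    """PAF for an exposure with the given prevalence and relative risk."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Same relative risk, but a measure that flags more of the population as exposed
# produces a larger attributable fraction:
print(round(levin_paf(0.30, 2.0), 2))  # 0.23
print(round(levin_paf(0.55, 2.0), 2))  # 0.35
```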

IN PRACTICE:

“The results of our study indicate that excess body fat is a significant risk factor for hormone receptor–positive breast cancer in postmenopausal women. Our findings suggest that the population impact could be underestimated when using traditional BMI estimates, and that more accurate measures of body fat, such as CUN-BAE, should be considered,” the authors of the study wrote.

SOURCE:

This study was led by Verónica Dávila-Batista, University of Las Palmas de Gran Canaria in Las Palmas de Gran Canaria, Spain. It was published online in Journal of Epidemiology and Community Health.

LIMITATIONS:

The case-control design of the study may have limited the ability to establish causal relationships. BMI was self-reported at the time of the interview for controls and 1 year before diagnosis for cancer cases, which may have introduced recall bias. The formula for CUN-BAE was calculated from a sedentary convenience sample, which may not have been representative of the general population. The small sample size of cases that did not express hormone receptors was another limitation. The study’s findings may not be generalizable to non-White populations as non-White participants were excluded.

DISCLOSURES:

Dávila-Batista disclosed receiving grants from the Carlos III Health Institute. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Publications
Topics
Sections

 

TOPLINE:

Excess body fat in postmenopausal women is linked to a higher risk for breast cancer, with the Clínica Universidad de Navarra-Body Adiposity Estimator (CUN-BAE) showing a stronger association than body mass index (BMI). Accurate body fat measures are crucial for effective cancer prevention.

METHODOLOGY:

  • Researchers conducted a case-control study including 1033 breast cancer cases and 1143 postmenopausal population controls from the MCC-Spain study.
  • Participants were aged 20-85 years. BMI was calculated as the ratio of weight to height squared and categorized using World Health Organization standards: < 25, 25-29.9, 30-34.9, and ≥ 35.
  • CUN-BAE was calculated using a specific equation and categorized according to the estimated percentage of body fat: < 35%, 35%-39.9%, 40%-44.9%, and ≥ 45%.
  • Odds ratios (ORs) were estimated with 95% CIs for both measures (BMI and CUN-BAE) for breast cancer cases using unconditional logistic regression.

TAKEAWAY:

  • Excess body weight attributable to the risk for breast cancer was 23% when assessed using a BMI value > 30 and 38% when assessed using a CUN-BAE value > 40% body fat.
  • Hormone receptor stratification showed that these differences in population-attributable fractions were only observed in hormone receptor–positive cases, with an estimated burden of 19.9% for BMI and 41.9% for CUN-BAE.
  • The highest categories of CUN-BAE showed an increase in the risk for postmenopausal breast cancer (OR, 2.13 for body fat ≥ 45% compared with the reference category < 35%).
  • No similar trend was observed for BMI, as the gradient declined after a BMI ≥ 35.

IN PRACTICE:

“The results of our study indicate that excess body fat is a significant risk factor for hormone receptor–positive breast cancer in postmenopausal women. Our findings suggest that the population impact could be underestimated when using traditional BMI estimates, and that more accurate measures of body fat, such as CUN-BAE, should be considered,” the authors of the study wrote.

SOURCE:

This study was led by Verónica Dávila-Batista, University of Las Palmas de Gran Canaria in Las Palmas de Gran Canaria, Spain. It was published online in Journal of Epidemiology and Community Health.

LIMITATIONS:

The case-control design of the study may have limited the ability to establish causal relationships. BMI was self-reported at the time of the interview for controls and 1 year before diagnosis for cancer cases, which may have introduced recall bias. The formula for CUN-BAE was calculated from a sedentary convenience sample, which may not have been representative of the general population. The small sample size of cases that did not express hormone receptors was another limitation. The study’s findings may not be generalizable to non-White populations as non-White participants were excluded.

DISCLOSURES:

Dávila-Batista disclosed receiving grants from the Carlos III Health Institute. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

 


Unseen Cost of Weight Loss and Aging: Tackling Sarcopenia

Article Type
Changed
Wed, 10/16/2024 - 11:25

Losses of muscle and strength are inescapable effects of the aging process. Left unchecked, these progressive losses will start to impair physical function. 

Once a certain level of impairment occurs, an individual can be diagnosed with sarcopenia, which comes from the Greek words “sarco” (flesh) and “penia” (poverty). Individuals with sarcopenia have a significant increase in the risk for falls and death, as well as diminished quality of life.

Muscle mass losses generally occur with weight loss, and the increasing use of glucagon-like peptide 1 (GLP-1) medications may lead to greater incidence and prevalence of sarcopenia in the years to come.

A recent meta-analysis of 56 studies (mean participant age, 50 years) found a twofold greater risk for mortality in those with sarcopenia vs those without. Despite its health consequences, sarcopenia tends to be underdiagnosed and, consequently, undertreated at a population and individual level. Part of the reason probably stems from the lack of health insurance reimbursement for individual clinicians and hospital systems to perform sarcopenia screening assessments. 

In aging and obesity, it appears justified to include and emphasize a recommendation for sarcopenia screening in medical society guidelines; however, individual patients and clinicians do not need to wait for updated guidelines to implement sarcopenia screening, treatment, and prevention strategies in their own lives and/or clinical practice. 
 

Simple Prevention and Treatment Strategy

Much can be done to help prevent sarcopenia. The primary strategy, unsurprisingly, is engaging in frequent strength training. But that doesn’t mean hours in the gym every week. 

With just one session per week over 10 weeks, lean body mass (LBM), a common proxy for muscle mass, increased by 0.33 kg, according to a study that evaluated LBM improvements across different strength training frequencies. Adding a second weekly session was significantly better: in the twice-weekly group, LBM increased by 1.4 kg over 10 weeks, a gain more than four times that of the once-a-week group. (Adding a third weekly session produced no further improvement in LBM over two sessions.) 
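The "more than four times" figure follows directly from the two reported gains over the same 10-week period:

```python
# Reported 10-week lean body mass gains from the frequency study
once_weekly = 0.33   # kg, one session per week
twice_weekly = 1.4   # kg, two sessions per week

# Ratio of gains: roughly 4.2x, i.e. "more than four times greater"
print(round(twice_weekly / once_weekly, 1))  # → 4.2
```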

Although that particular study didn't identify greater benefit from training three times a week vs twice a week, the specific training routines and the lack of a protein intake assessment may have played a role in that finding. 

Speaking to those diminishing returns, a different study found only a marginally greater benefit from performing five or more sets per major muscle group per week, compared with fewer than five sets per week, for increasing muscle in the legs, arms, back, chest, and shoulders. 

Expensive gym memberships and fancy equipment are not necessary. While strength training machines and free weights have long been viewed as the optimal approach, a recent systematic review and meta-analysis found that comparable strength improvements can be achieved with workouts using resistance bands. For those who struggle to find time for the gym, or for whom membership fees are unaffordable, resistance bands are a cheaper and more convenient alternative. 

Lucas, Assistant Professor of Clinical Medicine, Comprehensive Weight Control Center, Weill Cornell Medicine, New York City, disclosed ties with Measured (Better Health Labs).

A version of this article appeared on Medscape.com.


Smartphone Data Flag Early Dementia Risk in Older Adults

Article Type
Changed
Mon, 10/14/2024 - 10:10

Older adults at risk for dementia can be identified using mobile data obtained during a wayfinding task, a novel real-world study suggested.

During a smartphone-assisted scavenger hunt on a university campus, researchers observed that older adults with subjective cognitive decline (SCD) paused more frequently, likely to reorient themselves, than those without SCD. This behavior served as an identifier of individuals with SCD.

“Deficits in spatial navigation are one of the first signs of Alzheimer’s disease,” said study investigator Nadine Diersch, PhD, guest researcher with the German Center for Neurodegenerative Diseases (DZNE), Tübingen.

This study, said Diersch, provides “first evidence of how a digital footprint for early dementia-related cognitive decline might look like in real-world settings during a short (less than 30 minutes) and remotely performed wayfinding task.” 

The study was published online in PLOS Digital Health.
 

Trouble With Orientation

A total of 72 men and women, ranging in age from their mid-20s to mid-60s, participated in the study; of the 48 older adults, 23 had SCD but still scored normally on neuropsychological assessments.

All study participants were instructed to independently find five buildings on the medical campus of the Otto-von-Guericke-University Magdeburg in Germany, guided by a smartphone app developed by the study team. Their patterns of movement were tracked by GPS.

All participants had similar knowledge of the campus, and all were experienced in using smartphones. They also practiced using the app beforehand.

In most cases, participants reached the five destinations in less than half an hour. The younger participants performed better than the older ones; on average, the younger adults walked shorter distances and generally did not use the help function on the app as often as the older ones.

In the older adults, the number of orientation stops was predictive of SCD status. The adults with SCD tended to hesitate more at intersections. A decline in executive functioning might explain this finding, Diersch said.
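The study's exact processing pipeline isn't described here, but counting "orientation stops" in a timestamped GPS trace can be sketched as flagging stretches where walking speed stays below a floor for a minimum duration. The thresholds below are illustrative assumptions, not the study's parameters:

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    t: float  # seconds since start
    x: float  # metres east (in a local projection)
    y: float  # metres north

def count_stops(trace, speed_floor=0.3, min_pause=3.0):
    """Count pauses: stretches where speed stays below speed_floor (m/s)
    for at least min_pause seconds. Threshold values are illustrative."""
    stops, pause_start, counted = 0, None, False
    for a, b in zip(trace, trace[1:]):
        speed = math.hypot(b.x - a.x, b.y - a.y) / (b.t - a.t)
        if speed < speed_floor:
            if pause_start is None:
                pause_start, counted = a.t, False
            if not counted and b.t - pause_start >= min_pause:
                stops += 1          # count each qualifying pause once
                counted = True
        else:
            pause_start = None      # movement resumed; reset the pause
    return stops

# Walk at ~1.4 m/s, stand still for ~6 s at an intersection, walk on:
trace = ([Fix(t, 1.4 * t, 0.0) for t in range(5)]           # t = 0..4
         + [Fix(t, 5.6, 0.0) for t in range(5, 11)]         # t = 5..10
         + [Fix(t, 5.6 + 1.4 * (t - 10), 0.0) for t in range(11, 15)])
print(count_stops(trace))  # → 1
```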

“Intact executive functioning is an important component of efficient navigation, for example, when switching between different navigation strategies or planning a route. However, since this was the first study on that subject, more research is needed to determine the precise contribution of different cognitive processes on digital wayfinding data,” said Diersch.

With more study, “we think that such a smartphone-assisted wayfinding task, performed in the immediate surroundings, could be used as a low-threshold screening tool — for example, to stratify subjects with regard to the need of extended cognitive and clinical diagnostics in specialized care,” she added.
 

‘A Game Changer’

Commenting on the research, Shaheen Lakhan, MD, PhD, neurologist and researcher based in Miami, Florida, who wasn’t involved in the research, said the findings have the potential to “revolutionize” dementia care.

“We’ve seen smartphones transform everything from banking to dating — now they’re set to reshape brain health monitoring. This ingenious digital scavenger hunt detects cognitive decline in real-world scenarios, bypassing costly, complex tests. It’s a game changer,” said Lakhan.

“Just as we track our steps and calories, we could soon track our cognitive health with a tap. This isn’t just innovation; it’s the future of dementia prevention and care unfolding on our smartphone screens. We’re not just talking about convenience. We’re talking about catching Alzheimer’s before it catches us,” he added.

The next phase, Lakhan noted, would be to develop smartphone apps as digital therapeutics, not just to detect cognitive decline but to treat or even prevent it.

“Imagine your phone not only flagging potential issues but also providing personalized brain training exercises to keep your mind sharp and resilient against dementia,” Lakhan said.

This work was funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) within the Collaborative Research Center “Neural Resources of Cognition” and a DZNE Innovation-2-Application Award. Diersch is now a full-time employee of neotiv. Lakhan had no relevant disclosures.

A version of this article first appeared on Medscape.com.


Long-Term Cognitive Monitoring Warranted After First Stroke

Article Type
Changed
Fri, 10/11/2024 - 12:42

A first stroke in older adults is associated with substantial immediate and accelerated long-term cognitive decline, suggested a new study that underscores the need for continuous cognitive monitoring in this patient population.

Results from the study, which included 14 international cohorts of older adults, showed that stroke was associated with a significant acute decline in global cognition and a small, but significant, acceleration in the rate of cognitive decline over time.

Cognitive assessments in primary care are “crucial, especially since cognitive impairment is frequently missed or undiagnosed in hospitals,” lead author Jessica Lo, MSc, biostatistician and research associate with the Center for Healthy Brain Aging, University of New South Wales, Sydney, Australia, told this news organization.

She suggested clinicians incorporate long-term cognitive assessments into care plans, using more sensitive neuropsychological tests in primary care to detect early signs of cognitive impairment. “Early detection would enable timely interventions to improve outcomes,” Lo said.

She also noted that poststroke care typically includes physical rehabilitation but not cognitive rehabilitation, which many rehabilitation centers aren’t equipped to provide.

The study was published online in JAMA Network Open.
 

Mapping Cognitive Decline Trajectory

Cognitive impairment after stroke is common, but the trajectory of cognitive decline following a first stroke, relative to prestroke cognitive function, remains unclear.

The investigators leveraged data from 14 population-based cohort studies of 20,860 adults (mean age, 73 years; 59% women) to map the trajectory of cognitive function before and after a first stroke.

The primary outcome was global cognition, defined as the standardized average of four cognitive domains (language, memory, processing speed, and executive function).
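As a rough sketch of how such a standardized composite is built, each domain score is z-scored against the cohort and the four z-scores are averaged per participant. The domain scores below are invented for illustration:

```python
import statistics

def zscores(values):
    """Standardize a domain's scores against the cohort (population SD)."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# Invented cohort scores, one row per domain, one column per participant
domains = [
    [55, 60, 45, 50],   # language
    [30, 35, 25, 30],   # memory
    [80, 90, 70, 80],   # processing speed
    [12, 14, 10, 12],   # executive function
]
z_by_domain = [zscores(d) for d in domains]

# Global cognition per participant = mean of their four domain z-scores
global_cog = [statistics.mean(col) for col in zip(*z_by_domain)]
print([round(g, 2) for g in global_cog])
```

On this scale, the paper's effect sizes (e.g., an acute drop of 0.25 SD) are movements in these standardized units.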

During a mean follow-up of 7.5 years, 1041 (5%) adults (mean age, 79 years) experienced a first stroke, a mean of 4.5 years after study entry.

In adjusted analyses, stroke was associated with a significant acute decline of 0.25 SD in global cognition and a “small but significant” acceleration in the rate of decline of −0.038 SD per year, the authors reported.

Stroke was also associated with acute decline in all individual cognitive domains except for memory, with effect sizes ranging from −0.17 to −0.22 SD. Poststroke declines in Mini-Mental State Examination scores (−0.36 SD) were also noted.

In terms of cognitive trajectory, the rate of decline before stroke in survivors was similar to that seen in peers who didn’t have a stroke (−0.048 and −0.049 SD per year in global cognition, respectively).
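The reported trajectory amounts to a piecewise-linear model: a common prestroke slope, a step drop at stroke, and a steeper slope afterward. A simplified sketch using the paper's point estimates (this ignores the study's mixed-model machinery and covariate adjustment):

```python
def expected_cognition(t, t_stroke=None,
                       slope=-0.048, acute_drop=0.25, accel=-0.038):
    """Global cognition (SD units, baseline 0) at year t under a
    piecewise-linear trajectory with an acute drop at stroke."""
    value = slope * t
    if t_stroke is not None and t >= t_stroke:
        value -= acute_drop                 # step change at stroke
        value += accel * (t - t_stroke)     # steeper post-stroke slope
    return value

# Ten years of follow-up, with a first stroke at year 4.5 (the cohort mean):
no_stroke = expected_cognition(10)                   # -0.48 SD
with_stroke = expected_cognition(10, t_stroke=4.5)   # -0.48 - 0.25 - 0.038 * 5.5
print(round(no_stroke, 2), round(with_stroke, 2))  # → -0.48 -0.94
```

The gap between the two curves keeps widening after the stroke, which is the rationale for long-term monitoring.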

The researchers did not identify any vascular risk factors that moderated cognitive decline following a stroke, consistent with prior research. However, regardless of any future stroke, cognitive decline was significantly more rapid in individuals who had a history of diabetes, hypertension, high cholesterol, cardiovascular disease, depression, or smoking, or who were APOE4 carriers.

“Targeting modifiable vascular risk factors at an early stage may reduce the risk of stroke but also subsequent risk of stroke-related cognitive decline and cognitive impairment,” the researchers noted.
 

A ‘Major Step’ in the Right Direction

As previously reported by this news organization, in 2023 the American Heart Association (AHA) issued a statement noting that screening for cognitive impairment should be part of multidisciplinary care for stroke survivors.

Commenting for this news organization, Mitchell Elkind, MD, MS, AHA chief clinical science officer, said these new data are consistent with current AHA guidelines and statements that “support screening for cognitive and functional decline in patients both acutely and over the long term after stroke.”

Elkind noted that the 2022 guideline for intracerebral hemorrhage states that cognitive screening should occur “across the continuum of inpatient care and at intervals in the outpatient setting” and provides recommendations for cognitive therapy.

“Our 2021 scientific statement on the primary care of patients after stroke also recommends screening for both depression and cognitive impairment over both the short- and long-term,” said Elkind, professor of neurology and epidemiology at Columbia University Irving Medical Center in New York City.

“These documents recognize the fact that function and cognition can continue to decline years after stroke and that patients’ rehabilitation and support needs may therefore change over time after stroke,” Elkind added.

The authors of an accompanying commentary called it a “major step” in the right direction for the future of long-term stroke outcome assessment.

“As we develop new devices, indications, and time windows for stroke treatment, it may perhaps be wise to ensure trials steer away from simpler outcomes to more complex, granular ones,” wrote Yasmin Sadigh, MSc, and Victor Volovici, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands.

The study had no commercial funding. The authors and commentary writers and Elkind have declared no conflicts of interest.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

A first stroke in older adults is associated with substantial immediate and accelerated long-term cognitive decline, suggested a new study that underscores the need for continuous cognitive monitoring in this patient population.

Results from the study, which included 14 international cohorts of older adults, showed that stroke was associated with a significant acute decline in global cognition and a small, but significant, acceleration in the rate of cognitive decline over time.

Cognitive assessments in primary care are “crucial, especially since cognitive impairment is frequently missed or undiagnosed in hospitals,” lead author Jessica Lo, MSc, biostatistician and research associate with the Center for Healthy Brain Aging, University of New South Wales, Sydney, Australia, told this news organization.

She suggested clinicians incorporate long-term cognitive assessments into care plans, using more sensitive neuropsychological tests in primary care to detect early signs of cognitive impairment. “Early detection would enable timely interventions to improve outcomes,” Lo said.

She also noted that poststroke care typically includes physical rehabilitation but not cognitive rehabilitation, which many rehabilitation centers aren’t equipped to provide.

The study was published online in JAMA Network Open.
 

Mapping Cognitive Decline Trajectory

Cognitive impairment after stroke is common, but the trajectory of cognitive decline following a first stroke, relative to prestroke cognitive function, remains unclear.

The investigators leveraged data from 14 population-based cohort studies of 20,860 adults (mean age, 73 years; 59% women) to map the trajectory of cognitive function before and after a first stroke.

The primary outcome was global cognition, defined as the standardized average of four cognitive domains (language, memory, processing speed, and executive function).

During a mean follow-up of 7.5 years, 1041 (5%) adults (mean age, 79 years) experienced a first stroke, a mean of 4.5 years after study entry.

In adjusted analyses, stroke was associated with a significant acute decline of 0.25 SD in global cognition and a “small but significant” acceleration in the rate of decline of −0.038 SD per year, the authors reported.

Stroke was also associated with acute decline in all individual cognitive domains except for memory, with effect sizes ranging from −0.17 to −0.22 SD. Poststroke declines in Mini-Mental State Examination scores (−0.36 SD) were also noted.

In terms of cognitive trajectory, the rate of decline before stroke in survivors was similar to that seen in peers who didn’t have a stroke (−0.048 and −0.049 SD per year in global cognition, respectively).

The researchers did not identify any vascular risk factors that moderated cognitive decline following a stroke, consistent with prior research. However, among individuals without stroke, whether or not they later experienced one, cognitive decline was significantly more rapid in those with a history of diabetes, hypertension, high cholesterol, cardiovascular disease, depression, or smoking, and in APOE4 carriers.

“Targeting modifiable vascular risk factors at an early stage may reduce the risk of stroke but also subsequent risk of stroke-related cognitive decline and cognitive impairment,” the researchers noted.
 

A ‘Major Step’ in the Right Direction

As previously reported by this news organization, in 2023 the American Heart Association (AHA) issued a statement noting that screening for cognitive impairment should be part of multidisciplinary care for stroke survivors.

Commenting for this news organization, Mitchell Elkind, MD, MS, AHA chief clinical science officer, said these new data are consistent with current AHA guidelines and statements that “support screening for cognitive and functional decline in patients both acutely and over the long term after stroke.”

Elkind noted that the 2022 guideline for intracerebral hemorrhage states that cognitive screening should occur “across the continuum of inpatient care and at intervals in the outpatient setting” and provides recommendations for cognitive therapy.

“Our 2021 scientific statement on the primary care of patients after stroke also recommends screening for both depression and cognitive impairment over both the short- and long-term,” said Elkind, professor of neurology and epidemiology at Columbia University Irving Medical Center in New York City.

“These documents recognize the fact that function and cognition can continue to decline years after stroke and that patients’ rehabilitation and support needs may therefore change over time after stroke,” Elkind added.

The authors of an accompanying commentary called the study a “major step” in the right direction for the future of long-term stroke outcome assessment.

“As we develop new devices, indications, and time windows for stroke treatment, it may perhaps be wise to ensure trials steer away from simpler outcomes to more complex, granular ones,” wrote Yasmin Sadigh, MSc, and Victor Volovici, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands.

The study had no commercial funding. The authors, the commentary writers, and Elkind declared no conflicts of interest.

A version of this article first appeared on Medscape.com.


FROM JAMA NETWORK OPEN
