Intensive Lifestyle Changes May Counter Early Alzheimer’s Symptoms
Intensive lifestyle changes may improve cognition and function in patients with mild cognitive impairment (MCI) or early dementia due to Alzheimer’s disease, in what the authors said is the first randomized controlled trial of intensive lifestyle modification for patients diagnosed with Alzheimer’s disease. Results could help physicians address patients at risk of Alzheimer’s disease who reject relevant testing because they believe nothing can forestall development of the disease, the authors added. The study was published online in Alzheimer’s Research & Therapy.
Although technology allows probable Alzheimer’s disease diagnosis years before clinical symptoms appear, wrote investigators led by Dean Ornish, MD, of the Preventive Medicine Research Institute in Sausalito, California, “many people do not want to know if they are likely to get Alzheimer’s disease if they do not believe they can do anything about it. If intensive lifestyle changes may cause improvement in cognition and function in MCI or early dementia due to Alzheimer’s disease, then it is reasonable to think that these lifestyle changes may also help to prevent MCI or early dementia due to Alzheimer’s disease.” As with cardiovascular disease, the authors added, preventing Alzheimer’s disease might require less intensive lifestyle modifications than treating it.
Study Methodology
Investigators randomized 26 patients with Montreal Cognitive Assessment scores of 18 or higher to an intensive intervention involving nutrition, exercise, and stress management techniques. To improve adherence, the protocol included participants’ spouses or caregivers.
Two patients, both in the treatment group, withdrew over logistical concerns.
After 20 weeks, treated patients exhibited statistically significant differences in several key measures versus a 25-patient usual-care control group. Scores that improved in the intervention group and worsened among controls included the following:
- Clinical Global Impression of Change (CGIC, P = .001)
- Clinical Dementia Rating-Global (CDR-Global, -0.04, P = .037)
- Clinical Dementia Rating Sum of Boxes (CDR-SB, +0.08, P = .032)
- Alzheimer’s Disease Assessment Scale (ADAS-Cog, -1.01, P = .053)
The validity of these changes in cognition and function, and possible biological mechanisms of improvement, were supported by statistically significant improvements in several clinically relevant biomarkers versus controls, the investigators wrote. These biomarkers included Abeta42/40 ratio, HbA1c, insulin, and glycoprotein acetylation. “This information may also help in predicting which patients are more likely to show improvements in cognition and function by making these intensive lifestyle changes,” the authors added.
In the primary analysis, the degree of lifestyle change required to stop progression of MCI ranged from 71.4% (ADAS-Cog) to 120.6% (CDR-SB) of the prescribed intervention. “This helps to explain why other studies of less intensive lifestyle interventions may not have been sufficient to stop deterioration or improve cognition and function,” the authors wrote. Moreover, they added, variable adherence might explain why, in the intervention group, 10 patients improved their CGIC scores while the rest held static or worsened.
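The extrapolation behind those percentages can be sketched as fitting a line to each patient’s change score versus adherence and solving for the adherence at which the predicted change is zero. The numbers below are invented for illustration only; the authors’ actual data and model are not reproduced here.

```python
# Illustrative sketch (hypothetical data, not the study's): regress change
# score on adherence, then solve for the adherence at which predicted
# change equals zero -- the "degree of lifestyle change required to stop
# progression" reported per outcome measure.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical adherence (fraction of prescribed changes; >1.0 = exceeded)
adherence = [0.4, 0.6, 0.8, 1.0, 1.2]
# Hypothetical change in a cognition score (negative = decline)
change = [-1.5, -0.9, -0.4, 0.2, 0.7]

slope, intercept = fit_line(adherence, change)
adherence_for_no_decline = -intercept / slope  # x where predicted change = 0
print(f"{adherence_for_no_decline:.1%} adherence needed to halt decline")
```

A value above 100%, as with CDR-SB in the study, simply means the fitted line crosses zero beyond full adherence to the prescribed regimen.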
Caveats
Heather M. Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, who was not involved with the study, was asked to comment. “This is an interesting paper in an important area of research and adds to the growing body of literature on how behavior or lifestyle may be related to cognitive decline,” she said. “However, because this is a small phase 2 study, it is important for this or similar work to be done in larger, more diverse populations and over a longer duration of the intervention.”
Investigators chose the 20-week duration, they explained, because control-group patients likely would not refrain from trying the lifestyle intervention beyond that timeframe. Perhaps more importantly, challenges created by the COVID-19 pandemic required researchers to cut planned enrollment in half, eliminate planned MRI and amyloid PET scans, and reduce the number of cognition and function tests.
Such shortcomings limit what neurologists can glean and generalize from the study, said Dr. Snyder. “That said,” she added, “it does demonstrate the potential of an intensive behavior/lifestyle intervention, and the importance of this sort of research in Alzheimer’s and dementia.” Although the complexity of the interventions makes these studies challenging, she added, “it is important that we continue to advance larger, longer studies in more representative study populations to develop specific recommendations.”
Further Study
The Alzheimer’s Association’s U.S. POINTER study is the first large-scale study in the United States to explore the impact of comprehensive lifestyle changes on cognitive health. About 2000 older adults at risk for cognitive decline, from diverse locations across the country, are participating. More than 25% of participants come from groups typically underrepresented in dementia research, said Dr. Snyder. Initial results are expected in summer 2025.
Future research also should explore reasons (beyond adherence) why some patients respond to lifestyle interventions better than others, and the potential synergy of lifestyle changes with drug therapies, wrote Dr. Ornish and colleagues.
“For now,” said Dr. Snyder, “there is an opportunity for providers to incorporate or expand messaging with their patients and families about the habits that they can incorporate into their daily lives. The Alzheimer’s Association offers 10 Healthy Habits for Your Brain — everyday actions that can make a difference for your brain health.”
Investigators received study funding from more than two dozen charitable foundations and other organizations. Dr. Snyder is a full-time employee of the Alzheimer’s Association and in this role, serves on the leadership team of the U.S. POINTER study. Her partner works for Abbott in an unrelated field.
FROM ALZHEIMER’S RESEARCH & THERAPY
Lidocaine Effective Against Pediatric Migraine
SAN DIEGO — Lidocaine treatment for migraine has long been used in adults, and frequently in children on the strength of observational evidence.
Prior Research
Most of the studies have been conducted in adults, and these were often in specific settings like the emergency department for status migrainosus, while outpatient studies were generally conducted in chronic migraine, according to presenting author Christina Szperka, MD. “The assumptions were a little bit different,” Dr. Szperka, director of the pediatric headache program at Children’s Hospital of Philadelphia, said in an interview.
Retrospective studies are also fraught with bias. “We’ve tried to look at retrospective data. People don’t necessarily report how they’re doing unless they come back, and so you lose a huge portion of kids,” said Dr. Szperka, who presented the research at the annual meeting of the American Headache Society.
“From a clinical perspective, I think it gives us additional evidence that what we’re doing makes a difference, and I think that will help us in terms of insurance coverage, because that’s really been a major barrier,” said Dr. Szperka.
The study also opens other avenues for research. “Just doing the greater occipital nerves only reduces the pain so much. So what’s the next step? Do I study additional injections? Do I do a study where I compare different medications?”
She previously conducted a study of how providers were using lidocaine injections, and “there was a large amount of variability, both in terms of what nerves are being injected, what medications they were using, the patient population, et cetera,” said Dr. Szperka. Previous observational studies have suggested efficacy in pediatric populations for transition and prevention of migraine, new daily persistent headache, posttraumatic headache, and post-shunt occipital neuralgia.
A Randomized, Controlled Trial
In the new study, 58 children and adolescents aged 7-21 years (mean age, 16.0 years; 44 female) were initially treated with lidocaine cream. The patients were “relatively refractory,” said Dr. Szperka, with 25 having received intravenous medications and 6 having been inpatients. After 30 minutes, if they still had pain and consented to further treatment, Dr. Szperka performed bilateral greater occipital nerve injections with lidocaine or a saline placebo, with additional injections after 30 minutes if there wasn’t sufficient improvement.
There was no significant change in pain after the lidocaine cream treatment, and all patients proceeded to randomization to lidocaine or placebo injections. The primary outcome, reduction in pain score (rated 0-10) at 30 minutes, favored the lidocaine group (2.3 vs 1.1; P = .013). A 2-point reduction in pain score occurred in 69% of the lidocaine group and 34% of the saline group (P = .009), and relief from moderate/severe pain to no pain or mild pain was more frequent with lidocaine (52% vs 24%; P = .03). There was no significant difference in pain freedom.
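The responder comparisons above can be approximated with a Fisher exact test on the 2x2 table of responders versus nonresponders. The sketch below assumes equal arms of 29 patients (the article reports 58 randomized but not the exact split) and derives counts by rounding the reported percentages (69% → 20/29, 34% → 10/29); the article’s P = .009 may come from a different test, so the value computed here lands only in the same neighborhood.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one -- the standard two-sided rule.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):
        # Hypergeometric probability that row 1 contains x of the col-1 total
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Assumed counts: 20/29 lidocaine responders vs 10/29 saline responders
p = fisher_exact_two_sided(20, 9, 10, 19)
print(f"two-sided Fisher exact P = {p:.3f}")
```

With these assumed counts the test confirms a significant between-group difference at the conventional .05 threshold.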
After 24 hours, the treatment group was more likely to experience pain relief from moderate/severe to no pain or mild (24% vs 3%; P = .05) and to be free from associated symptoms (48% vs 21%; P = .027). Pain at the injection site was significantly higher in the placebo group (5.4 vs 3.2), prompting a change in plans for future trials. “I don’t think I would do saline again, because I think it hurt them, and I don’t want to cause them harm,” said Dr. Szperka.
Adverse events were common, with all but one patient in the study experiencing at least one. “I think this is a couple of things: One, kids don’t like needles in their head. Nerve blocks hurt. And so it was not surprising in some ways that we had a very high rate of adverse events. We also consented them, and that had a long wait period, and there’s a lot of anxiety in the room. However, most of the adverse events were mild,” said Dr. Szperka.
Important Research in an Understudied Population
Laine Greene, MD, who moderated the session, was asked for comment. “I think it’s an important study. Occipital nerve blocks have been used for a long period of time in management of migraine and other headache disorders. The quality of the evidence has always been brought into question, especially from payers, but also a very important aspect to this is that a lot of clinical trials over time have not specifically been done in children or adolescents, so any work that is done in that age category is significantly helpful to advancing therapeutics,” said Dr. Greene, associate professor of neurology at Mayo Clinic Arizona.
Dr. Szperka has consulted for AbbVie and Teva, and serves on data safety and monitoring boards for Eli Lilly and Upsher-Smith. She has been a principal investigator in trials sponsored by AbbVie, Amgen, Biohaven/Pfizer, Teva, and Theranica. Dr. Greene has no relevant financial disclosures.
FROM AHS 2024
Genetic Test Combo May Help Identify Global Developmental Delay
, a new study suggests.
In a multicenter, prospective cohort study, researchers led by Jiamei Zhang, MS, of the Department of Rehabilitation Medicine, Third Affiliated Hospital of Zhengzhou University, Zhengzhou, China, enrolled patients aged 12-60 months with global developmental delay (GDD) from six centers in China from July 2020 through August 2023. Participants underwent trio whole exome sequencing (trio-WES) paired with copy number variation sequencing (CNV-seq).
“To the best of our knowledge, this study represents the largest prospective examination of combined genetic testing methods in a GDD cohort,” the authors reported in JAMA Network Open.
GDD is a common neurodevelopmental disorder, marked by cognitive impairment, that affects about 1% of children, the paper states. Most children with GDD develop intellectual disability (ID) after 5 years of age, with implications for quality of life, physical abilities, and social functioning. Early and accurate diagnosis followed by appropriately targeted treatment is critical but lacking, and researchers note a lack of consensus among health care professionals on whether genetic testing is necessary.
Genetics are known to play a significant role in pathogenesis of GDD, but definitive biomarkers have been elusive.
Positive Detection Rate of 61%
In this study, the combined use of trio-WES with CNV-seq in children with early-stage GDD resulted in a positive detection rate of 61%, a significant improvement over performing individual tests, “enhancing the positive detection rate by 18%-40%,” the researchers wrote. The combined approach also saves families time and costs, they note, while leading to more comprehensive genetic analysis and fewer missed diagnoses.
The combined approach also addressed the limitations of trio-WES and CNV-seq used alone, the authors wrote. Because of technological constraints, trio-WES may miss 55% of CNV variations, and CNV-seq has a missed diagnosis rate of 3%.
The study included 434 patients with GDD (60% male; average age, 25 months) with diverse degrees of cognitive impairment: mild (23%); moderate (32%); severe (28%); and profound (17%).
Three characteristics were linked with higher likelihood of having genetic variants: Craniofacial abnormalities (odds ratio [OR], 2.27; 95% confidence interval [CI], 1.45-3.56); moderate or severe cognitive impairment (OR, 1.69; 95% CI, 1.05-2.70); and age between 12 and 24 months (OR, 1.57; 95% CI, 1.05-2.35).
Dopaminergic Pathway Promising for Treatment
Researchers also discovered that GDD-related genes were primarily enriched in lysosome, dopaminergic synapse, and lysine degradation pathways. Dopaminergic synapse emerged as a significant pathway linked with GDD.
“In this cohort study, our findings support the correlation between dopaminergic synapse and cognitive impairment, as substantiated by prior research and animal models. Therefore, targeting the dopaminergic pathway holds promise for treating GDD and ID,” the authors wrote.
However, the authors note in the limitations that they used only a subset of 100 patients with GDD to measure dopamine concentration.
“Expanding the sample size and conducting in vivo and in vitro experiments are necessary steps to verify whether dopamine can be targeted for clinical precision medical intervention in patients with GDD,” they wrote.
The authors reported no relevant financial relationships.
FROM JAMA NETWORK OPEN
What Toxic Stress Can Do to Health
We recently shared a clinical case drawn from a family medicine practice about the effect of adverse childhood experiences (ACEs) on health. The widespread epidemiology and significant health consequences require a focus on the prevention and management of ACEs.
The Centers for Disease Control and Prevention published an important monograph on ACEs in 2019. Although it is evidence based, most of the interventions recommended to reduce ACEs and their sequelae are larger policy and public health efforts that go well beyond the clinician’s office. Important highlights from these recommended strategies to reduce ACEs include:
- Strengthen economic support for families through policies such as the earned income tax credit and child tax credit.
- Establish routine parental work/shift times to optimize cognitive outcomes in children.
- Promote social norms for healthy families through public health campaigns and legislative efforts to reduce corporal punishment of children. Bystander training that targets boys and men has also proven effective in reducing sexual violence.
- Facilitate early in-home visitation for at-risk families as well as high-quality childcare.
- Employ social-emotional learning approaches for children and adolescents, which can improve aggressive or violent behavior, rates of substance use, and academic success.
- Connect youth to after-school programs featuring caring adults.
But clinicians still play a vital role in the prevention and management of ACEs among their patients. Initiating universal ACE screening in practice and exploring related topics in conversation is akin to gathering a patient’s past medical history or family history.
The ACEs Aware initiative in California provides a comprehensive ACE screening clinical workflow to help implement these conversations in practice, including the assessment of associated health conditions and their appropriate clinical follow-up. While it is encouraged to universally screen patients, the key screenings to prioritize for the pediatric population are “parental depression, severe stress, unhealthy drug use, domestic violence, harsh punishment, [and] food insecurity.” Moreover, a systematic review by Steen and colleagues shared insight into newer interpretations of ACE screening which relate trauma to “[...] community violence, poverty, housing instability, structural racism, environmental blight, and climate change.”
These exposures are now being investigated for a connection to the toxic stress response. In the long term, this genetic regulatory mechanism can be affected by “high doses of cumulative adversity experienced during critical and sensitive periods of early life development — without the buffering protections of trusted, nurturing caregivers and safe, stable environments.” This micro and macro lens fosters a deeper clinician understanding of a patient’s trauma origin and can better guide appropriate clinical follow-up.
ACE-associated health conditions can be neurologic, endocrine, metabolic, or immune system–related. Early diagnosis and treatment of these conditions can help prevent long-term complications that are costly for both the patient and the health care system.
The ACEs Aware Stress Buster wheel highlights seven targets to strategize stress regulation. This wheel can be used to identify existing protective factors for patients and track treatment progress, which may buffer the negative impact of stressors and contribute to health and resilience.
The burden of universal screenings in primary care is high. Without ACE screening, however, the opportunity to address downstream health effects from toxic stress may be lost. Dubowitz and colleagues suggest ways to successfully incorporate ACE screenings in clinical workflow:
- Utilize technology to implement a streamlined referral processing/tracking system.
- Train clinicians to respond competently to positive ACE screens.
- Gather in-network and community-based resources for patients.
In addition, prioritize screening for families with children younger than 6 years of age to begin interventions as early as possible. Primary care clinicians have the unique opportunity to provide appropriate intervention over continual care. An intervention as simple as encouraging pediatric patient involvement in after-school programs may mitigate toxic stress and prevent the development of an ACE-associated health condition.
Dr. Vega, Health Sciences Clinical Professor, Family Medicine, University of California, Irvine, disclosed ties with McNeil Pharmaceuticals. Alejandra Hurtado, MD candidate, University of California, Irvine School of Medicine, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Anticoagulation Shows No Benefit in Preventing Second Stroke
BOSTON — Patients who have had a stroke are thought to be at a higher risk for another one, but oral anticoagulation with edoxaban led to no discernible reduction in the risk for a second stroke, and the risk for major bleeding was more than quadruple the risk with no anticoagulation, a subanalysis of a major European trial has shown.
“There is no interaction between prior stroke or TIA [transient ischemic attack] and the treatment effect, and this is true for the primary outcome and the safety outcome,” Paulus Kirchoff, MD, director of cardiology at the University Heart and Vascular Center in Hamburg, Germany, said during his presentation of a subanalysis of the NOAH-AFNET 6 trial at the annual meeting of the Heart Rhythm Society (HRS) 2024. However, “there is a signal for more safety events in patients randomized to anticoagulation with a prior stroke.”
The subanalysis involved 253 patients who had had a stroke or TIA and who had device-detected atrial fibrillation (AF) from the overall NOAH-AFNET 6 population of 2536 patients, which enrolled patients 65 years and older with at least one additional CHA2DS2-VASc risk factor and patients 75 years and older with device-detected subclinical AF episodes of at least 6 minutes. Patients were randomized to either edoxaban or no anticoagulation, but 53.9% of the no-anticoagulation group was taking aspirin at trial enrollment. Anticoagulation with edoxaban was shown to have no significant impact on stroke rates or other cardiovascular outcomes.
Subanalysis Results
In the subanalysis, a composite of stroke, systemic embolism, and cardiovascular death — the primary outcome — was similar in the edoxaban and no-anticoagulation groups (14/122 patients [11.5%] vs 16/131 patients [12.2%]; 5.7% vs 6.3% per patient-year).
The rate of recurrent stroke was also similar in the edoxaban and no-anticoagulation groups (4 of 122 patients [3.3%] vs 6 of 131 patients [4.6%]; 1.6% vs 2.3% per patient-year). And there were eight cardiovascular deaths in each group.
However, edoxaban patients had significantly higher rates of major bleeding.
“This is a subanalysis, so what we see in terms of the number of patients with events is not powered for a definitive answer, but we do see that there were 10 major bleeds in the group of patients with a prior stroke or TIA in NOAH,” Dr. Kirchoff reported. “Eight of those 10 major bleeds occurred in patients randomized to edoxaban.”
Results from the NOAH-AFNET 6 trial have been compared with those from the ARTESiA trial, which compared apixaban anticoagulation with aspirin in patients with subclinical AF and was also presented at HRS 2024. ARTESiA showed that apixaban significantly lowered the risk for stroke and systemic embolism.
“In ARTESiA, everyone was on aspirin when they were randomized to no anticoagulation; in NOAH, only about half were on aspirin,” Dr. Kirchoff said.
Both studies had similar outcomes for cardiovascular death in the anticoagulation and no-anticoagulation groups. “It’s not significant; it may be chance, but it’s definitely not the reduction in death that we have seen in the anticoagulant trials,” Dr. Kirchoff said. “When you look at the meta-analyses of the early anticoagulation trials, there’s a one third reduction in death, and here we’re talking about a smaller reduction.”
This research points to a need for a better way to evaluate stroke risk. “We need new markers,” Dr. Kirchoff said. “Some of them may be in the blood or imaging, genetics maybe, and one thing that really emerges from my perspective is that we now have the first evidence to suggest that patients with a very low atrial fibrillation burden have a low stroke rate.”
More research is needed to better understand AF characteristics and stroke risk, he said.
AF Care Enters a ‘Gray Zone’
The NOAH-AFNET 6 results, coupled with those from ARTESiA, are changing the paradigm for anticoagulation in patients with stroke, said Taya Glotzer, MD, an electrophysiologist at the Hackensack University Medical Center in Hackensack, New Jersey, who compiled her own analysis of the studies’ outcomes.
“In ARTESiA, the stroke reduction was only 0.44% a year, with a number needed to treat of 250,” she said. “In the NOAH-AFNET 6 main trial, the stroke reduction was 0.2%, with the number needed to treat of 500, and in the NOAH prior stroke patients, there was a 0.7% reduction, with a number needed to treat of 143.”
None of these trials would meet the standard for a class 1 recommendation for anticoagulation, which would require a risk reduction of 1%-2% per year, she noted, but they do show that the stroke rate “is very, very low” in patients with prior stroke.
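The numbers needed to treat quoted above follow from the standard relationship between NNT and absolute risk reduction, NNT = 1/ARR. A minimal sketch of that arithmetic (an illustration, not part of the trial analyses), using annualized risk reductions like those cited in the text:

```python
# Number needed to treat (NNT) is the reciprocal of the absolute risk
# reduction (ARR): NNT = 1 / ARR. The ARR values below are illustrative
# annualized stroke reductions.

def nnt(arr: float) -> int:
    """Return the number needed to treat for a given absolute risk reduction."""
    if arr <= 0:
        raise ValueError("ARR must be positive for NNT to be defined")
    return round(1 / arr)

print(nnt(0.002))  # 0.2% per year -> 500
print(nnt(0.007))  # 0.7% per year -> 143
print(nnt(0.004))  # 0.4% per year -> 250
```

An ARR of 0.2% per year (the NOAH-AFNET 6 main trial figure) thus corresponds to treating 500 patients for a year to prevent one event, which is why these small absolute reductions translate into the large NNTs discussed here.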
“Prior to 2024, we knew what was black and white; we knew who to anticoagulate and who not to anticoagulate. And now we are in a gray zone, trying to balance the risk of stroke and bleeding. We have to individualize or hope for substudies, perhaps using the CHA2DS2-VASc score or other information about the left atrium, to help us make decisions in these patients. It’s not just going to be black and white,” she said.
Dr. Kirchoff had no relevant financial relationships to disclose. Dr. Glotzer disclosed financial relationships with Medtronic, Abbott, Boston Scientific, and MediaSphere Medical.
A version of this article first appeared on Medscape.com.
“In ARTESiA, the stroke reduction was only 0.44% a year, with a number needed to treat of 250,” she said. “In the NOAH-AFNET 6 main trial, the stroke reduction was 0.2%, with the number needed to treat of 500, and in the NOAH prior stroke patients, there was a 0.7% reduction, with a number needed to treat of 143.”
None of these trials would meet the standard for a class 1 recommendation for anticoagulation with a reduction of even 1%-2% per year, she noted, but they do show that the stroke rate "is very, very low" in patients with prior stroke.
"Prior to 2024, we knew what was black and white; we knew who to anticoagulate and who not to anticoagulate. And now we are in a gray zone, trying to balance the risk of stroke and bleeding. We have to individualize or hope for substudies, perhaps using the CHA2DS2-VASc score or other information about the left atrium, to help us make decisions in these patients. It's not just going to be black and white," she said.
Dr. Kirchoff had no relevant financial relationships to disclose. Dr. Glotzer disclosed financial relationships with Medtronic, Abbott, Boston Scientific, and MediaSphere Medical.
A version of this article first appeared on Medscape.com.
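Dr. Glotzer's figures follow from the standard relationship between number needed to treat and absolute risk reduction, NNT = 1/ARR. A quick check (the helper function name is illustrative; the article's ARTESiA figure of 250 appears to use a rounded ARR of 0.4%):

```python
def nnt(arr_percent: float) -> int:
    """Number needed to treat given an absolute risk reduction in % per year."""
    return round(1 / (arr_percent / 100))

print(nnt(0.44))  # ARTESiA: 227 (the article quotes 250, i.e., ARR rounded to 0.4%)
print(nnt(0.2))   # NOAH-AFNET 6 main trial: 500
print(nnt(0.7))   # NOAH prior-stroke subgroup: 143
```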
FROM HRS 2024
GLP-1s Reduced Secondary Stroke Risk in Patients With Diabetes, Obesity
, according to authors of a recent meta-analysis. With benefits across administration routes, dosing regimens, type 2 diabetes status, and total and nonfatal strokes, the findings could improve GLP-1 RA implementation by stroke specialists in patients with stroke history and concurrent type 2 diabetes or obesity, authors said. The study was published online in the International Journal of Stroke.
Extending Longevity
Agents including GLP-1 RAs that have been found to reduce cardiovascular events among patients with type 2 diabetes and patients who are overweight or obese also reduce risk of recurrent stroke among patients with a history of stroke who are overweight, obese, or have metabolic disease, said American Heart Association (AHA) Chief Clinical Science Officer Mitchell S. V. Elkind, MD, who was not involved with the study but was asked to comment.
“Stroke is a leading cause of mortality and the leading cause of serious long-term disability,” he added, “so medications that help to reduce that risk can play an important role in improving overall health and well-being and hopefully reducing premature mortality.”
Investigators Anastasia Adamou, MD, an internal medicine resident at AHEPA University Hospital in Thessaloniki, Greece, and colleagues searched MEDLINE and Scopus for cardiovascular outcome trials involving adults randomly assigned to GLP-1 RAs or placebo through November 2023, ultimately analyzing 11 randomized controlled trials (RCTs).
Among 60,380 participants in the nine studies that assessed total strokes, 2.5% of the GLP-1 RA group experienced strokes during follow-up, versus 3% in the placebo group (relative risk [RR] 0.85, 95% confidence interval [CI] 0.77-0.93). Regarding secondary outcomes, the GLP-1 RA group showed a significantly lower rate of nonfatal strokes versus patients on placebo (RR 0.87, 95% CI 0.79-0.95). Conversely, investigators observed no significant risk difference between the groups regarding fatal strokes, probably due to the low rate of events — 0.3% and 0.4% for treated and untreated patients, respectively.
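As an illustration of the headline statistic, relative risk is simply the ratio of event proportions between groups. The counts below are hypothetical values consistent with the reported 2.5% and 3%, not the study's actual pooled data, and a meta-analysis would weight each trial rather than pool raw counts:

```python
def relative_risk(events_t: int, n_t: int, events_c: int, n_c: int) -> float:
    """Ratio of the event proportion in the treated group to the control group."""
    return (events_t / n_t) / (events_c / n_c)

# Hypothetical counts matching the reported proportions (2.5% vs 3%)
rr = relative_risk(events_t=750, n_t=30000, events_c=900, n_c=30000)
print(f"RR = {rr:.2f}")  # RR = 0.83, close to the pooled RR of 0.85
```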
Subgroup analyses revealed no interaction between dosing frequency and total, nonfatal, or fatal strokes. The investigators observed no difference in nonfatal strokes among participants by type 2 diabetes status and medication administration route (oral versus subcutaneous).
“The oral administration route could provide the advantage of lower local ecchymoses and allergic reactions due to subcutaneous infusions,” Dr. Adamou said in an interview. But because oral administration demands daily intake, she added, treatment adherence might be affected. “For this reason, our team performed another subgroup analysis to compare the once-a-day to the once-a-month administration. No interaction effect was again presented between the two subgroups. This outcome allows for personalization of the administration method for each patient.”
Addressing Underutilization
Despite more than 2 decades of widespread use and well-established effects on body weight, HbA1c, and cardiovascular risk, GLP-1 RAs remain underutilized, authors wrote. This is especially true in primary care, noted one study published in Clinical Diabetes.
"GLP-1 RAs have been used for many years to treat diabetic patients," said Dr. Adamou. But because their impact on cardiovascular health regardless of diabetes status has only recently become known, she said, physicians are exercising caution when prescribing these medications to patients without diabetes. "This is why more studies need to be available, especially RCTs."
Most neurologists traditionally have left management of type 2 diabetes and other metabolic disorders to primary care doctors, said Dr. Elkind. “However, these medications are increasingly important to vascular risk reduction and should be considered part of the stroke specialist’s armamentarium.”
Vascular neurologists can play an important role in managing metabolic disease and obesity by recommending GLP-1 RAs for patients with a history of stroke, or by initiating these medications themselves, Dr. Elkind said. “These drugs are likely to become an important part of stroke patients’ medication regimens, along with antithrombotic agents, blood pressure control, and statins. Neurologists are well-positioned to educate other physicians about the important connections among brain, heart, and metabolic health.”
To that end, he said, the AHA will update guidelines for both primary and secondary stroke prevention as warranted by evidence supporting GLP-1 RAs and other medications that could impact stroke risk in type 2 diabetes and related metabolic disorders. However, no guidelines concerning use of GLP-1 RAs for secondary stroke prevention in obesity exist. Here, said Dr. Elkind, the AHA will continue building on its innovative Cardiovascular-Kidney Metabolic Health program, which includes clinical suggestions and may include more formal clinical practice guidelines as the evidence evolves.
Among the main drivers of the initiative, he said, is the recognition that cardiovascular disease — including stroke — is the major cause of death and morbidity among patients with obesity, type 2 diabetes, and metabolic disorders. “Stroke should be considered an important part of overall cardiovascular risk, and the findings that these drugs can help to reduce the risk of stroke specifically is an important additional reason for their use.”
Dr. Elkind and Dr. Adamou reported no conflicting interests. The authors received no financial support for the study.
FROM THE INTERNATIONAL JOURNAL OF STROKE
Solving Restless Legs: Largest Genetic Study to Date May Help
For decades, scientists have been trying to unravel the mysteries of restless legs syndrome (RLS), a poorly understood and underdiagnosed neurological disorder causing itching, crawling, and aching sensations in the limbs that can only be relieved with movement.
A sweeping new genetic study, coauthored by an international team of 70 — including the world’s leading RLS experts — marks a significant advance in that pursuit. Published in Nature Genetics, it is the largest genetic study of the disease to date.
“It’s a huge step forward for patients as well as the scientific community,” said lead author Juliane Winkelmann, MD, a neurologist and geneticist with the Technical University of Munich, Munich, Germany, who’s been studying and treating patients with RLS for 30 years. “We believe it will allow us to better predict the likelihood of developing RLS and investigate new ways to prevent and modify it.”
The common condition, affecting about 1 in 10 adults, was first described centuries ago — by English physician Thomas Willis in the late 1600s. And while we know a lot more about it today — it’s familial in about half of all patients and has been linked to iron deficiency, among other conditions — its exact cause remains unknown.
With the drugs long preferred to quell symptoms now shown to actually worsen the disorder over time, doctors and patients are hungry for alternatives to treat or prevent the sleep-sabotaging condition.
"The main treatments that everybody continues to use are actually making people worse," said Andrew Berkowski, MD, a Michigan-based neurologist and RLS specialist not involved in the study. These drugs — levodopa and dopamine agonists such as pramipexole — can also potentially cause drug dependence, Dr. Berkowski said.
How This Could Lead to New Treatments
In the new study, the group analyzed three genome-wide association studies, collectively including genetic information from 116,647 patients with RLS and more than 1.5 million people without it.
They identified 161 gene regions believed to contribute to RLS, about a dozen of which are already targets for existing drugs for other conditions. Previously, scientists knew of only 22 associated genes.
“It’s useful in that it identifies new genes we haven’t looked at yet and reinforces the science behind some of the older genes,” said Dr. Berkowski. “It’s given us some ideas for different things we should look into more closely.”
Among the top candidates are genes that influence glutamate — a key chemical messenger that helps move signals between nerve cells in the brain.
Several anticonvulsant and antiseizure drugs, including perampanel, lamotrigine, and gabapentin, target glutamate receptors. And at least one small study has shown perampanel prescribed off-label can improve RLS symptoms.
“Compared to starting at the beginning and developing an entirely new chemical entity, we could run clinical trials using these alternatives in RLS patients,” said the study’s first author, Steven Bell, PhD, an epidemiologist with the University of Cambridge, Cambridge, England.
The study also confirmed the MEIS1 gene, which is related to dopamine expression and iron homeostasis, as a key genetic contributor to RLS risk. Low levels of iron in the blood have long been thought to trigger RLS.
The Role of Gene-Environment Interactions
Through additional data analysis, the team confirmed that many of the genes associated with RLS play a role in development of the central nervous system.
“This strongly supports the hypothesis that restless legs syndrome is a neurodevelopmental disorder that develops during the embryo stage but doesn’t clinically manifest until later in life,” said Dr. Winkelmann.
About half of people with RLS report some family history of it.
But not all with a genetic predisposition will develop symptoms.
For instance, the study found that while the same gene regions seem to be associated with risk in both men and women, in practice, RLS is twice as common among women. This suggests that something about women’s lives — menstruation, childbirth, metabolism — may switch a preexisting risk into a reality.
“We know that genetic factors play an important role in making people susceptible to the disease,” said Dr. Winkelmann, “but in the end, it is the interaction between genetic and environmental factors that may lead to its manifestation.”
The study also found associations between RLS and depression and suggests that RLS may increase the risk for type 2 diabetes.
Improving RLS Care
A potentially useful tool coming out of the study was a “polygenic risk score,” which the researchers developed based on the genes identified. When they tested how accurately the score could predict whether someone would develop RLS within the next 5 years, the model got it right about 90% of the time.
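Conceptually, a polygenic risk score is a weighted sum of an individual's risk-allele counts across associated variants. The variant IDs, weights, and genotype in this sketch are invented for illustration; the study's actual score was built from the RLS-associated loci it identified:

```python
# Hypothetical per-allele effect weights (e.g., log-odds) for three variants
weights = {"rsA": 0.12, "rsB": 0.08, "rsC": 0.21}
# One person's risk-allele counts at those variants (0, 1, or 2 copies)
genotype = {"rsA": 2, "rsB": 0, "rsC": 1}

# The score is the dot product of weights and allele counts
prs = sum(weights[v] * genotype[v] for v in weights)
print(round(prs, 2))  # 0.45
```

A higher score places an individual further up the risk distribution; in practice such scores are compared against population percentiles rather than read as absolute probabilities.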
Dr. Winkelmann imagines a day when someone could use such a polygenic risk score to flag the high risk for RLS early enough to take action to try to prevent it. More research is necessary to determine precisely what that action would be.
As for treatments, Dr. Berkowski thinks it’s unlikely that doctors will suddenly begin using existing, glutamate-targeting drugs off-label to treat RLS, as many are prohibitively expensive and wouldn’t be covered by insurance. But he’s optimistic that the study can spawn new research that could ultimately help fill the treatment gap.
Shalini Paruthi, MD, an adjunct professor at Saint Louis University, St. Louis, Missouri, and chair of the Restless Legs Syndrome Foundation’s board of directors, sees another benefit.
“The associations found in this study between RLS and other medical disorders may help patients and their physicians take RLS more seriously,” Dr. Paruthi said, “as treating RLS can lead to multiple other downstream improvements in their health.”
A version of this article appeared on Medscape.com.
For decades, scientists have been trying to unravel the mysteries of restless legs syndrome (RLS), a poorly understood and underdiagnosed neurological disorder causing itching, crawling, and aching sensations in the limbs that can only be relieved with movement.
A sweeping new genetic study, coauthored by an international team of 70 — including the world’s leading RLS experts — marks a significant advance in that pursuit. Published in Nature Genetics, it is the largest genetic study of the disease to date.
“It’s a huge step forward for patients as well as the scientific community,” said lead author Juliane Winkelmann, MD, a neurologist and geneticist with the Technical University of Munich, Munich, Germany, who’s been studying and treating patients with RLS for 30 years. “We believe it will allow us to better predict the likelihood of developing RLS and investigate new ways to prevent and modify it.”
The common condition, affecting about 1 in 10 adults, was first described centuries ago — by English physician Thomas Willis in the late 1600s. And while we know a lot more about it today — it’s familial in about half of all patients and has been linked to iron deficiency, among other conditions — its exact cause remains unknown.
Because the drugs long prescribed to quell symptoms have been shown in recent years to actually worsen the disorder over time, doctors and patients are hungry for alternatives to treat or prevent the sleep-sabotaging condition.
“The main treatments that everybody continues to use are actually making people worse,” said Andrew Berkowski, MD, a Michigan-based neurologist and RLS specialist not involved in the study. These drugs — dopaminergic agents such as levodopa and dopamine agonists such as pramipexole — can also potentially cause drug dependence, Dr. Berkowski said.
How This Could Lead to New Treatments
In the new study, the group analyzed three genome-wide association studies, collectively including genetic information from 116,647 patients with RLS and more than 1.5 million people without it.
They identified 161 gene regions believed to contribute to RLS, about a dozen of which are already targets for existing drugs for other conditions. Previously, scientists knew of only 22 associated genes.
“It’s useful in that it identifies new genes we haven’t looked at yet and reinforces the science behind some of the older genes,” said Dr. Berkowski. “It’s given us some ideas for different things we should look into more closely.”
Among the top candidates are genes that influence glutamate — a key chemical messenger that helps move signals between nerve cells in the brain.
Several anticonvulsant and antiseizure drugs, including perampanel, lamotrigine, and gabapentin, target glutamate receptors. And at least one small study has shown perampanel prescribed off-label can improve RLS symptoms.
“Compared to starting at the beginning and developing an entirely new chemical entity, we could run clinical trials using these alternatives in RLS patients,” said the study’s first author, Steven Bell, PhD, an epidemiologist with the University of Cambridge, Cambridge, England.
The study also confirmed the MEIS1 gene, which is related to dopamine expression and iron homeostasis, as a key genetic contributor to RLS risk. Low levels of iron in the blood have long been thought to trigger RLS.
The Role of Gene-Environment Interactions
Through additional data analysis, the team confirmed that many of the genes associated with RLS play a role in development of the central nervous system.
“This strongly supports the hypothesis that restless legs syndrome is a neurodevelopmental disorder that develops during the embryo stage but doesn’t clinically manifest until later in life,” said Dr. Winkelmann.
About half of people with RLS report some family history of it.
But not all with a genetic predisposition will develop symptoms.
For instance, the study found that while the same gene regions seem to be associated with risk in both men and women, in practice, RLS is twice as common among women. This suggests that something about women’s lives — menstruation, childbirth, metabolism — may switch a preexisting risk into a reality.
“We know that genetic factors play an important role in making people susceptible to the disease,” said Dr. Winkelmann, “but in the end, it is the interaction between genetic and environmental factors that may lead to its manifestation.”
The study also found associations between RLS and depression and suggests that RLS may increase the risk for type 2 diabetes.
Improving RLS Care
A potentially useful tool coming out of the study was a “polygenic risk score,” which the researchers developed based on the genes identified. When they tested how accurately the score could predict whether someone would develop RLS within the next 5 years, the model got it right about 90% of the time.
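In general terms, a polygenic risk score is a weighted sum of the risk alleles a person carries, with each variant weighted by its estimated effect size. A minimal sketch of that calculation follows; the variant names and weights are hypothetical illustrations, not values from the Nature Genetics study.

```python
# Minimal sketch of a polygenic risk score (PRS): a weighted sum of
# risk-allele counts. Variant IDs and effect sizes below are hypothetical,
# NOT the actual values estimated in the study.
def polygenic_risk_score(genotype, weights):
    """genotype: dict mapping variant id -> risk-allele count (0, 1, or 2).
    weights: dict mapping variant id -> per-allele effect size."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

# Hypothetical effect sizes for three illustrative variants
weights = {"rs_A": 0.30, "rs_B": 0.12, "rs_C": 0.05}
person = {"rs_A": 2, "rs_B": 1, "rs_C": 0}  # one person's allele counts
score = polygenic_risk_score(person, weights)
print(round(score, 2))  # 0.72
```

In practice, scores computed this way are compared against the distribution in a reference population to classify someone as high or low risk.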
Dr. Winkelmann imagines a day when someone could use such a polygenic risk score to flag the high risk for RLS early enough to take action to try to prevent it. More research is necessary to determine precisely what that action would be.
As for treatments, Dr. Berkowski thinks it’s unlikely that doctors will suddenly begin using existing, glutamate-targeting drugs off-label to treat RLS, as many are prohibitively expensive and wouldn’t be covered by insurance. But he’s optimistic that the study can spawn new research that could ultimately help fill the treatment gap.
Shalini Paruthi, MD, an adjunct professor at Saint Louis University, St. Louis, Missouri, and chair of the Restless Legs Syndrome Foundation’s board of directors, sees another benefit.
“The associations found in this study between RLS and other medical disorders may help patients and their physicians take RLS more seriously,” Dr. Paruthi said, “as treating RLS can lead to multiple other downstream improvements in their health.”
A version of this article appeared on Medscape.com.
‘Shockingly High’ Rate of TBI in Older Adults
TOPLINE:
Rates of traumatic brain injury (TBI) among adults aged 65 years and older are “shockingly high,” a new study showed.
METHODOLOGY:
- Researchers analyzed data from approximately 9200 Medicare enrollees aged 65 years and older who were part of the Health and Retirement Study (HRS) from 2000 to 2018.
- The baseline date was the date of the first age-eligible HRS core interview in the community in 2000 or later.
- Incident TBI cases were identified using an updated list of International Classification of Diseases (ICD) 9th- and 10th-revision codes from the Defense and Veterans Brain Injury Center and the Armed Forces Health Surveillance Branch for TBI surveillance.
- Codes corresponded with emergency department visits and CT and/or fMRI imaging.
TAKEAWAY:
- Almost 13% of older individuals (n = 797) experienced TBI during the study, highlighting its significant prevalence in this population.
- Older adults (mean age at baseline, 75 years) who experienced TBI during the study period were more likely to be women, White individuals, and individuals with higher levels of education and normal cognition (P < .001), challenging previous assumptions about risk factors.
- The study underscored the need for targeted interventions and research focused on TBI prevention and postdischarge care in older adults.
IN PRACTICE:
“The number of people 65 and older with TBI is shockingly high,” senior author Raquel Gardner, MD, said in a press release. “We need evidence-based guidelines to inform postdischarge care of this very large Medicare population and more research on post-TBI dementia prevention and repeat injury prevention.”
SOURCE:
The study was led by Erica Kornblith, PhD, of the University of California, San Francisco. It was published online in JAMA Network Open.
LIMITATIONS:
The study’s reliance on ICD codes for TBI identification may not capture the full spectrum of TBI severity. Self-reported data on sociodemographic factors may have introduced bias, affecting the accuracy of associations with TBI incidence. In addition, the findings’ generalizability may be limited due to the study’s focus on Medicare enrollees, potentially excluding those from diverse socioeconomic backgrounds.
DISCLOSURES:
The study was funded by the Alzheimer’s Association, the US Department of Veterans Affairs, the National Institute on Aging, and the Department of Defense. Disclosures are noted in the original study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
Early-Life Excess Weight Tied to Subsequent Stroke Risk
Overweight or obesity in adolescence and young adulthood may increase the risk for stroke later in life, new research suggested.
An analysis of more than five decades of health data on 10,000 adults revealed that close to 5% experienced a stroke during the follow-up period, with the risk for ischemic stroke being more than twice as high in women who had obesity as teens or young adults. The risk was even higher for hemorrhagic stroke in both men and women with a history of obesity in youth.
“Our findings suggest that being overweight may have long-term health effects, even if the excess weight is temporary,” lead author Ursula Mikkola, BM, an investigator in the Research Unit of Population Health at the University of Oulu, Oulu, Finland, said in a news release.
“Health care professionals should pay attention to overweight and obesity in young people and work with them to develop healthier eating patterns and physical activity — however, conversations with teens and young adults about weight should be approached in a nonjudgmental and nonstigmatizing manner,” she added.
The study was published online in Stroke.
Gender Differences
Childhood obesity has been associated with a heightened risk for cerebrovascular disease later in life, but most studies have focused on body mass index (BMI) at a single time point without considering its fluctuations throughout life, the investigators noted.
For the study, investigators used data from the Northern Finland Birth Cohort 1966, a prospective, general population-based birth cohort that followed 10,491 individuals (5185 women) until 2020 or the first stroke, death, or moving abroad, whichever came first.
Mean follow-up for each participant was 39 years from age 14 onward and 23 years from age 31 onward. The analysis was conducted between 1980 and 2020.
BMI data were collected from participants at ages 14 and 31 years. Age 14 covariates included smoking, parental socioeconomic status, and age at menarche (for girls). Age 31 covariates included smoking and participants’ educational level.
During the follow-up period, 4.7% of participants experienced stroke. Of these events, 31% were ischemic strokes and 40% were transient ischemic attacks. The remainder were hemorrhagic or other cerebrovascular events.
Using normal weight as a reference, researchers found that the risk for ischemic stroke was over twice as high for women who had been overweight at ages 14 (hazard ratio [HR], 2.49; 95% confidence interval [CI], 1.44-4.31) and 31 (HR, 2.13; 95% CI, 1.14-3.97) years. Point estimates were also elevated for women who had obesity at ages 14 (HR, 1.87; 95% CI, 0.76-4.58) and 31 (HR, 2.67; 95% CI, 1.26-5.65) years, although the age-14 interval crossed 1 and so did not reach statistical significance.
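A quick way to read these figures: a hazard ratio is conventionally treated as statistically significant at the 5% level only when its 95% CI excludes 1. A small sketch applying that check to the women's ischemic-stroke estimates quoted above:

```python
# Flag which hazard ratios are statistically significant by checking
# whether the 95% confidence interval excludes 1. Values are the
# women's ischemic-stroke estimates quoted in the text.
def ci_excludes_one(lower, upper):
    return lower > 1.0 or upper < 1.0

estimates = {
    "overweight at age 14": (2.49, 1.44, 4.31),
    "overweight at age 31": (2.13, 1.14, 3.97),
    "obesity at age 14":    (1.87, 0.76, 4.58),
    "obesity at age 31":    (2.67, 1.26, 5.65),
}
for label, (hr, low, high) in estimates.items():
    verdict = "significant" if ci_excludes_one(low, high) else "CI crosses 1"
    print(f"{label}: HR {hr} (95% CI, {low}-{high}) -> {verdict}")
```

Note that the interval for obesity at age 14 includes 1, so that estimate on its own does not reach conventional significance.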
The risk for hemorrhagic stroke was even higher, both among women (HR, 3.49; 95% CI, 1.13-10.7) and men (HR, 5.75; 95% CI, 1.43-23.1) who had obesity at age 31.
No similar associations were found among men, and the findings were independent of earlier or later BMI.
The risk for any cerebrovascular disease related to overweight at age 14 was twice as high among girls vs boys (HR, 2.09; 95% CI, 1.06-4.15), and the risk for ischemic stroke related to obesity at age 31 was nearly seven times higher among women vs men (HR, 6.96; 95% CI, 1.36-35.7).
“Stroke at a young age is rare, so the difference of just a few strokes could have an outsized impact on the risk estimates,” the study authors said. “Also, BMI relies solely on a person’s height and weight; therefore, a high BMI may be a misleading way to define obesity, especially in muscular people who may carry little fat even while weighing more.”
Caveats
In an accompanying editorial, Larry Goldstein, MD, chair of the Department of Neurology, University of Kentucky, Lexington, Kentucky, and codirector of the Kentucky Neuroscience Institute, said the study “provides additional evidence of an association between overweight/obesity and stroke in young adults.”
However, Dr. Goldstein added that “while it is tempting to assume that reductions in overweight/obesity in younger populations would translate to lower stroke rates in young adults, this remains to be proven.”
Moreover, it is “always important to acknowledge that associations found in observational studies may not reflect causality.”
This study was supported by Orion Research Foundation, Päivikki and Sakari Sohlberg Foundation, and Paulo Foundation. Dr. Mikkola reported no relevant financial relationships. The other authors’ disclosures are listed on the original paper. Dr. Goldstein reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
Novel Method Able to Predict if, When, Dementia Will Develop
Novel, noninvasive testing is able to predict dementia onset with more than 80% accuracy up to 9 years before clinical diagnosis.
The results suggest resting-state functional MRI (rs-fMRI) could be used to identify a neural network signature of dementia risk early in the pathological course of the disease, an important advance as disease-modifying drugs such as those targeting amyloid beta are now becoming available.
“The brain has been changing for a long time before people get symptoms of dementia, and if we’re very precise about how we do it, we can actually, in principle, detect those changes, which could be really exciting,” study investigator Charles R. Marshall, PhD, professor of clinical neurology, Centre for Preventive Neurology, Wolfson Institute of Population Health, Queen Mary University of London, London, England, told this news organization.
“This could become a platform for screening people for risk status in the future, and it could one day make all the difference in terms of being able to prevent dementia,” he added.
The findings were published online in Nature Mental Health.
The rs-fMRI measures fluctuations in blood oxygen level–dependent signals across the brain, which reflect functional connectivity.
Brain regions commonly implicated in altered functional connectivity in Alzheimer’s disease (AD) are within the default-mode network (DMN). This is the group of regions “connecting with each other and communicating with each other when someone is just lying in an MRI scanner doing nothing, which is how it came to be called the default-mode network,” explained Dr. Marshall.
The DMN encompasses the medial prefrontal cortex, posterior cingulate cortex or precuneus, and bilateral inferior parietal cortices, as well as supplementary brain regions including the medial temporal lobes and temporal poles.
This network is believed to be selectively vulnerable to AD neuropathology. “Something about that network starts to be disrupted in the very earliest stages of Alzheimer’s disease,” said Dr. Marshall.
While this has been known for some time, “what we’ve not been able to do before is build a precise enough model of how the network is connected to be able to tell whether individual participants were going to get dementia or not,” he added.
The investigators used data from the UK Biobank, a large-scale biomedical database and research resource containing genetic and health information from about half a million UK volunteer participants.
The analysis included 103 individuals with dementia (22 with prevalent dementia and 81 later diagnosed with dementia over a median of 3.7 years) and 1030 matched participants without dementia. All participants underwent MRI between 2006 and 2010.
The total sample had a mean age of 70.4 years at the time of MRI data acquisition. For each participant, researchers extracted relevant data from 10 predefined regions of interest in the brain, which together defined their DMN. This included two midline regions and four regions in each hemisphere.
Greater Predictive Power
Researchers built a model using an approach related to how brain regions communicate with each other. “The model sort of incorporates what we know about how the changes that you see on a functional MRI scan relate to changes in the firing of brain cells, in a very precise way,” said Dr. Marshall.
The researchers then used a machine learning approach to develop a model for effective connectivity, which describes the causal influence of one brain region over another. “We trained a machine learning tool to recognize what a dementia-like pattern of connectivity looks like,” said Dr. Marshall.
Investigators controlled for potential confounders, including age, sex, handedness, in-scanner head motion, and geographical location of data acquisition.
The model was able to determine the difference in brain connectivity patterns between those who would go on to develop dementia and those who would not, with an accuracy of 82% up to 9 years before an official diagnosis was made.
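The classification step described above can be pictured with a toy sketch: flatten the directed (off-diagonal) couplings of a 10-region effective-connectivity matrix into a feature vector, then fit a binary classifier. Everything below is synthetic and hypothetical; the study's actual pipeline estimated effective connectivity from rs-fMRI with a far more sophisticated model, and the data here are random stand-ins with an artificial signal injected so the toy model has something to learn.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois = 10                      # the study's DMN model used 10 regions of interest
n_feat = n_rois * (n_rois - 1)   # directed off-diagonal couplings: 90 features

def connectivity_features(ec: np.ndarray) -> np.ndarray:
    """Flatten the off-diagonal entries of a directed connectivity matrix."""
    return ec[~np.eye(ec.shape[0], dtype=bool)]

# Synthetic stand-in data: 200 "participants"; cases get a shifted coupling
# pattern so the toy classifier has signal. None of this is the study's data.
X = rng.normal(size=(200, n_feat))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.5

# Plain logistic regression trained by gradient descent (no external ML library).
w, b = np.zeros(n_feat), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)          # gradient step on weights
    b -= 0.1 * float(np.mean(p - y))             # gradient step on bias

accuracy = float(np.mean(((X @ w + b) > 0) == y))
```

The point of the sketch is only the shape of the problem: a per-participant connectivity matrix becomes a fixed-length feature vector, and a supervised model separates "dementia-like" from control patterns.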
When the researchers trained a model to use brain connections to predict time to diagnosis, the predicted and actual times to diagnosis were within about 2 years of each other.
This effective connectivity approach has much more predictive power than memory test scores or brain structural measures, said Dr. Marshall. “We looked at brain volumes and they performed very poorly, only just better than tossing a coin, and the same with cognitive test scores, which were only just better than chance.”
As for markers of amyloid beta and tau in the brain, these are “very useful diagnostically” but only when someone has symptoms, said Dr. Marshall. He noted people live for years with these proteins without developing dementia symptoms.
“We wouldn’t necessarily want to expose somebody who has a brain full of amyloid but was not going to get symptoms for the next 20 years to a treatment, but if we knew that person was highly likely to develop symptoms of dementia in the next 5 years, then we probably would,” he said.
Dr. Marshall believes the predictive power of all these diagnostic tools could be boosted if they were used together.
Potential for Early Detection, Treatment
Researchers examined a number of modifiable dementia risk factors, including hearing loss, depression, hypertension, and physical inactivity. They found self-reported social isolation was the only variable that showed a significant association with effective connectivity, meaning those who were socially isolated were more likely to have a “dementia-like” pattern of DMN effective connectivity. This finding raises the possibility that social isolation is a cause, rather than merely a consequence, of dementia.
The study also revealed associations between DMN effective connectivity and AD polygenic risk score, derived from meta-analysis of multiple external genome-wide association study sources.
A predictive tool that uses rs-fMRI could also help select participants at a high risk for dementia to investigate potential treatments. “There’s good reason to think that if we could go in earlier with, for example, anti-amyloid treatments, they’re more likely to be effective,” said Dr. Marshall.
The new test might eventually have value as a population screening tool, something akin to colon cancer screening, he added. “We don’t send everyone for a colonoscopy; you do a kind of pre-screening test at home, and if that’s positive, then you get called in for a colonoscopy.”
The researchers looked at all-cause dementia and not just AD because dementia subtype diagnoses in the UK Biobank “are not at all reliable,” said Dr. Marshall.
Study limitations included the fact that UK Biobank participants are healthier and less socioeconomically deprived than the general population and are predominantly White. Another study limitation was that labeling of cases and controls depended on clinician coding rather than on standardized diagnostic criteria.
Kudos, Caveats
In a release from the Science Media Center, a nonprofit organization promoting voices and views of the scientific community, Sebastian Walsh, National Institute for Health and Care Research doctoral fellow in Public Health Medicine, University of Cambridge, Cambridge, England, said the results are “potentially exciting,” and he praised the way the team conducted the study.
However, he noted some caveats, including the small sample size, with only about 100 people with dementia, and the relatively short time between the brain scan and diagnosis (an average of 3.7 years).
Dr. Walsh emphasized the importance of replicating the findings “in bigger samples with a much longer delay between scan and onset of cognitive symptoms.”
He also noted the average age of study participants was 70 years, whereas the average age at which individuals in the United Kingdom develop dementia is in the mid-to-late 80s, “so we need to see these results repeated for more diverse and older samples.”
He also noted that MRI scans are expensive, and the approach used in the study needs “a high-quality scan which requires people to keep their head still.”
Also commenting, Andrew Doig, PhD, professor, Division of Neuroscience, the University of Manchester, Manchester, England, said the MRI connectivity method used in the study might form part of a broader diagnostic approach.
“Dementia is a complex condition, and it is unlikely that we will ever find one simple test that can accurately diagnose it,” Dr. Doig noted. “Within a few years, however, there is good reason to believe that we will be routinely testing for dementia in middle-aged people, using a combination of methods, such as a blood test, followed by imaging.”
“The MRI connectivity method described here could form part of this diagnostic platform. We will then have an excellent understanding of which people are likely to benefit most from the new generation of dementia drugs,” he said.
Dr. Marshall and Dr. Walsh reported no relevant disclosures. Dr. Doig reported that he is a founder, shareholder, and consultant for PharmaKure Ltd, which is developing new diagnostics for neurodegenerative diseases using blood biomarkers.
A version of this article first appeared on Medscape.com.