The Link Between Vision Impairment and Dementia in Older Adults
TOPLINE:
- Nearly one in five dementia cases among US adults aged 71 years or older may be attributable to vision impairment, with impaired contrast sensitivity accounting for the largest share.
METHODOLOGY:
- Researchers conducted a cross-sectional analysis using data from the National Health and Aging Trends Study (NHATS).
- The analysis included 2767 US adults aged 71 years or older (54.7% female and 45.3% male).
- Vision impairments were defined using 2019 World Health Organization criteria. Near and distance vision impairments were defined as greater than 0.30 logMAR, and contrast sensitivity impairment was identified by scores below 1.55 logCS.
- Dementia was classified using a standardized algorithm developed in NHATS, which incorporated a series of tests measuring cognition, memory, and orientation; self- or proxy-reported diagnoses of Alzheimer’s disease or dementia; and an informant questionnaire (the Ascertain Dementia-8 [AD8] Dementia Screening Interview).
- The study analyzed data from 2021, with the primary outcome being the population attributable fraction (PAF) of dementia from vision impairment.
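The logMAR thresholds above map onto familiar Snellen notation (logMAR is the base-10 log of the minimum angle of resolution). As a rough illustration of the cutoff, not taken from the study itself:

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 20/40) to logMAR.

    logMAR = log10(MAR), where MAR = denominator / numerator.
    """
    return math.log10(denominator / numerator)

def is_acuity_impaired(logmar: float, threshold: float = 0.30) -> bool:
    """WHO-style cutoff used in the study: impairment is logMAR > 0.30."""
    return logmar > threshold

# 20/40 vision corresponds to logMAR ~0.301, just past the 0.30 cutoff.
print(round(snellen_to_logmar(20, 40), 3))            # 0.301
print(is_acuity_impaired(snellen_to_logmar(20, 40)))  # True
print(is_acuity_impaired(snellen_to_logmar(20, 20)))  # False (logMAR 0.0)
```

So 20/40 acuity sits just beyond the 0.30 logMAR impairment threshold, while 20/20 (logMAR 0.0) does not.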
TAKEAWAY:
- The PAF of dementia associated with at least one vision impairment was 19% (95% CI, 8.2-29.7).
- Impairment in contrast sensitivity had the highest PAF of all vision measures assessed, at 15% (95% CI, 6.6-23.6), higher than near acuity impairment at 9.7% (95% CI, 2.6-17.0) or distance acuity impairment at 4.9% (95% CI, 0.1-9.9).
- The highest PAFs for dementia due to vision impairment were observed among participants aged 71-79 years (24.3%; 95% CI, 6.6-41.8), women (26.8%; 95% CI, 12.2-39.9), and non-Hispanic White participants (22.3%; 95% CI, 9.6-34.5).
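The study's headline metric, the population attributable fraction, can be illustrated with Levin's classic formula. This is a simplified, unadjusted sketch with made-up numbers, not the study's covariate-adjusted estimation procedure:

```python
def paf_levin(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)).

    prevalence: proportion of the population exposed (e.g., having a
    vision impairment). relative_risk: risk of the outcome (dementia)
    in exposed vs unexposed people. Unadjusted illustration only.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: a 28% exposure prevalence and a relative risk of
# 2.0 yield a PAF of about 0.22, i.e., ~22% of cases attributable to
# the exposure if the association were causal.
print(round(paf_levin(0.28, 2.0), 3))  # 0.219
```

The PAF is read as the proportion of cases that would not occur if the exposure were removed, which is why it only carries causal meaning under assumptions the authors explicitly hedge.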
IN PRACTICE:
“While not proving a cause-and-effect relationship, these findings support inclusion of multiple objective measures of vision impairments, including contrast sensitivity and visual acuity, to capture the total potential impact of addressing vision impairment on dementia,” study authors wrote.
SOURCE:
This study was led by Jason R. Smith, ScM, of the Department of Epidemiology at the Johns Hopkins Bloomberg School of Public Health in Baltimore. It was published online in JAMA Ophthalmology.
LIMITATIONS:
The limited sample sizes for American Indian, Alaska Native, Asian, and Hispanic groups prevented researchers from calculating PAFs for these populations. The cross-sectional design prevented the researchers from examining the timing of vision impairment in relation to a diagnosis of dementia. The study did not explore links between other measures of vision and dementia. Those with early cognitive impairment may not have updated glasses, affecting visual performance. The findings from the study may not apply to institutionalized older adults.
DISCLOSURES:
Jennifer A. Deal, PhD, MHS, reported receiving personal fees from Frontiers in Epidemiology, Velux Stiftung, and Medical Education Speakers Network outside the submitted work. Nicholas S. Reed, AuD, PhD, reported receiving stock options from Neosensory outside the submitted work. No other disclosures were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Does MS Protect Against Alzheimer’s Disease?
In a recent study, patients with multiple sclerosis (MS) were substantially less likely than matched controls to show blood biomarker evidence of Alzheimer’s disease pathology, suggesting that MS may protect against Alzheimer’s disease. Understanding how MS does this may drive new treatment strategies, said the authors of the study, which was published online in Annals of Neurology. Regarding current treatments, they added, the availability of new disease-modifying Alzheimer’s disease therapies increases the importance of early diagnosis in cognitively impaired people, including those with MS.
Confirmatory Studies Needed
“Replication and confirmation of these findings, including in studies representative of the real-world Alzheimer’s population in race/ethnicity and sex/gender, are needed before any clinical implications can be drawn,” said Claire Sexton, DPhil, Alzheimer’s Association senior director of scientific programs and outreach. She was not involved with the study but was asked to comment.
The study’s most important immediate implication, said Dr. Sexton, is that it “opens the door to questions about why MS may be associated with Alzheimer’s risk.”
Anecdotal Observation
Although life expectancy for people with MS is increasing, the authors, led by Matthew R. Brier, MD, PhD, an assistant professor at Washington University in St. Louis, Missouri, said they have seen no concomitant rise in Alzheimer’s disease dementia among their patients with MS. This anecdotal observation fueled their hypothesis that Alzheimer’s disease pathology occurs less frequently in this population.
To test their hypothesis, the investigators sequentially enrolled 100 patients with MS (age 60 years or older), along with 300 non-MS controls matched for age, sex, apolipoprotein E (apoE) proteotype, and cognitive status. All participants underwent the Mini-Mental State Examination (MMSE) and PrecivityAD2 (C2N Diagnostics) blood testing.
Overall, patients with MS had lower p-tau217 (t = 3.76, P = .00019) and amyloid probability score 2 (APS2; t = 3.83, P = .00015) ratios than did those without MS. APS2 combines p-tau217 ratio with Abeta42/40 ratio. In addition, APS2 and p-tau217 ratios were lower in patients with MS and apoE3/apoE3 or apoE3/apoE4 proteotype. MMSE scores were also slightly lower in the MS cohort: 27.6 versus 28.44 for controls. Of 11 patients with MS who underwent Pittsburgh Compound B (PiB) positron emission tomography (PET), nine had congruent PiB PET and plasma results.
When the investigators applied clinical cutoffs, 7.1% of patients with MS were APS2-positive, versus 15.3% of controls (P = .0043). The corresponding figures for p-tau217 ratio positivity were 9% and 18.3%, respectively (P = .0024). Mean Abeta42/40 scores showed no difference between groups.
Patients with MS and positive amyloid biomarkers often had atypical MS features at diagnosis. Compared with biomarker-negative patients with MS, APS2-positive and p-tau217 ratio-positive patients had odds ratios of 23.3 and 11.38, respectively, for having at least two atypical MS features at diagnosis.
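The odds ratios reported here come from standard 2x2 contingency-table arithmetic. A minimal sketch with hypothetical cell counts (the article does not report the counts behind its ORs):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Cross-product odds ratio from a 2x2 table.

    a: biomarker-positive patients with >=2 atypical features
    b: biomarker-positive patients with <2 atypical features
    c: biomarker-negative patients with >=2 atypical features
    d: biomarker-negative patients with <2 atypical features
    OR = (a/b) / (c/d) = (a*d) / (b*c)
    """
    return (a * d) / (b * c)

# Hypothetical counts chosen only to land near the reported OR of 23.3;
# the true underlying table is not given in the article.
print(round(odds_ratio(5, 2, 9, 84), 1))  # 23.3
```

An OR above 1 means the atypical features are more common among biomarker-positive patients; with small cells like these, confidence intervals around such ORs are typically very wide.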
Data regarding the actual incidence of Alzheimer’s disease among people with MS are scarce and conflicting. An autopsy study published in Annals of Neurology in 2008 revealed the expected rate of amyloid pathology in MS brain tissue, along with extensive microglia activation. In a PET study published in Annals of Neurology in 2020, however, researchers found less amyloid pathology among patients with MS than those without, but little difference in tau pathology.
Because MS and Alzheimer’s disease can each cause cognitive impairment, the rate of co-occurrence of MS and Alzheimer’s disease has been difficult to ascertain without accurate biomarkers. But, the authors said, the advent of disease-modifying therapies makes identifying early Alzheimer’s dementia in MS patients relevant.
Possible Explanations
The authors hypothesized that the lower rate of amyloid pathology observed in their patients with MS may stem from the following possibly overlapping mechanisms:
- MS components, such as persistent perilesional immune activity, may inhibit amyloid beta deposition or facilitate its clearance.
- Exposure to MS drugs may impact Alzheimer’s disease pathology. Most study patients with MS were exposed to beta interferons or glatiramer acetate, the authors noted, and 39 had switched to high-efficacy medications such as B-cell depleting therapies and natalizumab.
- MS’s genetic signature may protect against Alzheimer’s disease.
“Investigating these ideas would advance our understanding of the relationship between MS and Alzheimer’s, and potentially inform avenues for treatment,” said Dr. Sexton. In this regard, the Alzheimer’s Association has funded an ongoing study examining a drug currently used to promote myelin formation in individuals with MS in genetically engineered Alzheimer’s-like mice. Additional Association-funded studies that examine inflammation also may improve understanding of the mechanisms that may link these diseases, said Dr. Sexton.
The study authors added that unusual cases, such as a study patient who had high amyloid burden by PET but negative APS2 and tau PET, also may shed light on interactions between MS, amyloid pathology, and tau pathology.
Limitations of the present study include the fact that plasma Alzheimer’s disease biomarkers are potentially affected by other conditions as well, according to a study published in Nature Medicine. Additional shortcomings include the MS cohort’s relatively small size and lack of diagnostic confirmation by cerebrospinal fluid. Although MMSE scores among patients with MS were slightly lower, the authors added, this disparity would lead one to expect more, not less, amyloid pathology among these patients if their cognitive impairment resulted from Alzheimer’s disease.
Dr. Sexton reported no relevant financial interests.
The study was supported by the Hope Center for Neurological Disorders at Washington University in St. Louis and by C2N Diagnostics. Washington University in St. Louis holds equity in C2N Diagnostics and may receive royalties resulting from use of PrecivityAD2.
FROM ANNALS OF NEUROLOGY
Brain Network Significantly Larger in People With Depression, Even in Childhood
Using a novel brain-mapping technique, researchers discovered that the frontostriatal salience network was expanded nearly twofold in the brains of most individuals with depression studied, compared with controls.
“This expansion in cortex was trait-like, meaning it was stable over time and did not change as symptoms changed over time,” said lead author Charles Lynch, PhD, assistant professor of neuroscience, Department of Psychiatry, Weill Cornell Medicine in New York.
It could also be detected in children who later developed depression, suggesting it may serve as a biomarker of depression risk. Investigators said the findings could aid in prevention and early detection of depression, as well as the development of more personalized treatment.
The study was published online in Nature.
Prewired for Depression?
Precision functional mapping is a relatively new approach to brain mapping in individuals that uses large amounts of fMRI data from hours of scans per person. The technique has been used to show differences in brain networks between healthy individuals but had not been used to study brain networks in people with depression.
“We leveraged our large longitudinal datasets — with many hours of functional MRI scanning per subject — to construct individual-specific maps of functional brain networks in each patient using precision functional mapping, instead of relying on group average,” Dr. Lynch said.
In the primary analysis of 141 adults with major depression and 37 healthy controls, the frontostriatal salience network, which is involved in reward processing and attention to internal and external stimuli, was markedly larger in those with depression.
“This is one of the first times these kinds of personalized maps have been created in individuals with depression, and this is how we first observed the salience network being larger in individuals with depression,” Dr. Lynch said.
In four of the six deeply sampled individuals with depression, the salience network was expanded more than twofold, outside the range observed in all 37 healthy controls. On average, the salience network occupied 73% more of the cortical surface relative to the average in healthy controls.
The findings were replicated using independent samples of repeatedly sampled individuals with depression and in large-scale group average data.
The expansion of the salience network did not change over time and was unaffected by changes in mood state.
“These observations led us to propose that instead of driving changes in depressive symptoms over time, salience network expansion may be a stable marker of risk for developing depression,” the study team wrote.
An analysis of brain scans from 57 children who went on to develop depressive symptoms during adolescence and an equal number of children who did not develop depressive symptoms supports this theory.
On average, the salience network occupied roughly 36% more of the cortex in children who had no current or previous symptoms of depression at the time of their fMRI scans but who subsequently developed clinically significant depressive symptoms, relative to children with no depressive symptoms at any study time point, the researchers found.
Immediate Clinical Impact?
Reached for comment, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, said this research “exemplifies the promising intersection of neurology and digital health, where advanced neuroimaging and data-driven approaches can transform mental health care into a more precise and individualized practice. By identifying this brain network expansion, we’re unlocking new possibilities for precision medicine in mental health.”
Dr. Lakhan, who wasn’t involved in this research, said identifying the expansion of the frontostriatal salience network in individuals with depression opens new avenues for developing novel therapeutics.
“By targeting this network through neuromodulation techniques like deep brain stimulation, transcranial magnetic stimulation, and prescription digital therapeutics, treatments can be more precisely tailored to individual neurobiological profiles,” Dr. Lakhan said. “Additionally, this network expansion could serve as a biomarker for early detection, allowing for preventive strategies or personalized treatment plans, particularly for those at risk of developing depression.”
In addition, a greater understanding of the mechanisms driving salience network expansion offers potential for discovering new pharmacological targets, Dr. Lakhan noted.
“Drugs that modulate synaptic plasticity or network connectivity might be developed to reverse or mitigate these neural changes. The findings also support the use of longitudinal monitoring to predict and preempt symptom emergence, improving outcomes through timely intervention. This research paves the way for more personalized, precise, and proactive approaches in treating depression,” Dr. Lakhan concluded.
Also weighing in, Teddy Akiki, MD, with the Department of Psychiatry and Behavioral Sciences at Stanford Medicine in California, noted that the effect size of the frontostriatal salience network difference in depression is “remarkably larger than typically seen in neuroimaging studies of depression, which often describe subtle differences. The consistency across multiple datasets and across time at the individual level adds significant weight to these findings, suggesting that it is a trait marker rather than a state-dependent marker.”
“The observation that this expansion is present even before the onset of depressive symptoms in adolescence suggests its potential as a biomarker for depression risk,” Dr. Akiki said. “This approach could lead to earlier identification of at-risk individuals and potentially inform the development of targeted preventive interventions.”
He cautioned that it remains to be seen whether interventions targeting the salience network can effectively prevent or treat depression.
This research was supported in part by the National Institute of Mental Health, the National Institute on Drug Addiction, the Hope for Depression Research Foundation, and the Foundation for OCD Research. Dr. Lynch and a coauthor are listed as inventors for Cornell University patent applications on neuroimaging biomarkers for depression which are pending or in preparation. Dr. Liston has served as a scientific advisor or consultant to Compass Pathways PLC, Delix Therapeutics, and Brainify.AI. Dr. Lakhan and Dr. Akiki had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Using a novel brain-mapping technique, researchers discovered that the frontostriatal salience network was expanded nearly twofold in the brains of most individuals studied with depression compared with controls.
“This expansion in cortex was trait-like, meaning it was stable over time and did not change as symptoms changed over time,” said lead author Charles Lynch, PhD, assistant professor of neuroscience, Department of Psychiatry, Weill Cornell Medicine in New York.
It could also be detected in children who later developed depression, suggesting it may serve as a biomarker of depression risk. Investigators said the findings could aid in prevention and early detection of depression, as well as the development of more personalized treatment.
The study was published online in Nature.
Prewired for Depression?
Precision functional mapping is a relatively new approach to brain mapping in individuals that uses large amounts of fMRI data from hours of scans per person. The technique has been used to show differences in brain networks between and within healthy individuals but had not been used to study brain networks in people with depression.
“We leveraged our large longitudinal datasets — with many hours of functional MRI scanning per subject — to construct individual-specific maps of functional brain networks in each patient using precision functional mapping, instead of relying on group average,” Dr. Lynch said.
In the primary analysis of 141 adults with major depression and 37 healthy controls, the frontostriatal salience network — which is involved in reward processing and attention to internal and external stimuli — was markedly larger in those with depression.
“This is one of the first times these kinds of personalized maps have been created in individuals with depression, and this is how we first observed the salience network being larger in individuals with depression,” Dr. Lynch said.
In four of the six individuals with depression who were scanned most intensively, the salience network was expanded more than twofold, outside the range observed in all 37 healthy controls. On average, the salience network occupied 73% more of the cortical surface relative to the average in healthy controls.
The findings were replicated using independent samples of repeatedly sampled individuals with depression and in large-scale group average data.
The expansion of the salience network did not change over time and was unaffected by changes in mood state.
“These observations led us to propose that instead of driving changes in depressive symptoms over time, salience network expansion may be a stable marker of risk for developing depression,” the study team wrote.
An analysis of brain scans from 57 children who went on to develop depressive symptoms during adolescence and an equal number of children who did not develop depressive symptoms supports this theory.
On average, the salience network occupied roughly 36% more of cortex in the children with no current or previous symptoms of depression at the time of their fMRI scans but who subsequently developed clinically significant symptoms of depression, relative to children with no depressive symptoms at any study time point, the researchers found.
Immediate Clinical Impact?
Reached for comment, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, said this research “exemplifies the promising intersection of neurology and digital health, where advanced neuroimaging and data-driven approaches can transform mental health care into a more precise and individualized practice. By identifying this brain network expansion, we’re unlocking new possibilities for precision medicine in mental health.”
Dr. Lakhan, who wasn’t involved in this research, said identifying the expansion of the frontostriatal salience network in individuals with depression opens new avenues for developing novel therapeutics.
“By targeting this network through neuromodulation techniques like deep brain stimulation, transcranial magnetic stimulation, and prescription digital therapeutics, treatments can be more precisely tailored to individual neurobiological profiles,” Dr. Lakhan said. “Additionally, this network expansion could serve as a biomarker for early detection, allowing for preventive strategies or personalized treatment plans, particularly for those at risk of developing depression.”
In addition, a greater understanding of the mechanisms driving salience network expansion offers potential for discovering new pharmacological targets, Dr. Lakhan noted.
“Drugs that modulate synaptic plasticity or network connectivity might be developed to reverse or mitigate these neural changes. The findings also support the use of longitudinal monitoring to predict and preempt symptom emergence, improving outcomes through timely intervention. This research paves the way for more personalized, precise, and proactive approaches in treating depression,” Dr. Lakhan concluded.
Also weighing in, Teddy Akiki, MD, with the Department of Psychiatry and Behavioral Sciences at Stanford Medicine in California, noted that the effect size of the frontostriatal salience network difference in depression is “remarkably larger than typically seen in neuroimaging studies of depression, which often describe subtle differences. The consistency across multiple datasets and across time at the individual level adds significant weight to these findings, suggesting that it is a trait marker rather than a state-dependent marker.”
“The observation that this expansion is present even before the onset of depressive symptoms in adolescence suggests its potential as a biomarker for depression risk,” Dr. Akiki said. “This approach could lead to earlier identification of at-risk individuals and potentially inform the development of targeted preventive interventions.”
He cautioned that it remains to be seen whether interventions targeting the salience network can effectively prevent or treat depression.
This research was supported in part by the National Institute of Mental Health, the National Institute on Drug Abuse, the Hope for Depression Research Foundation, and the Foundation for OCD Research. Dr. Lynch and a coauthor are listed as inventors on Cornell University patent applications for neuroimaging biomarkers for depression, which are pending or in preparation. Dr. Liston has served as a scientific advisor or consultant to Compass Pathways PLC, Delix Therapeutics, and Brainify.AI. Dr. Lakhan and Dr. Akiki had no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM NATURE
Nighttime Outdoor Light Pollution Linked to Alzheimer’s Risk
Exposure to outdoor light pollution at night may be linked to an increased prevalence of Alzheimer’s disease, a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers in Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018, and states were divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that Alzheimer’s disease prevalence in states with the lowest average nighttime light intensity differed significantly from that in states with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity was significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian and Pacific Islander individuals, a finding possibly related to limited statistical power, the authors said.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested light pollution may also influence Alzheimer’s disease through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and negatively alter the clearance of amyloid beta, and by decreasing the levels of brain-derived neurotrophic factor.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology, including its role in breast cancer among night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night and if risk factors such as air pollution and low socioeconomic status may correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported R01 and R24 grants from the National Institutes of Health (NIH); one coauthor reported an NIH R24 grant; another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers of Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018 and states divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that states with the lowest average nighttime light had significantly different Alzheimer’s disease prevalence than those with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity is significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian Pacific Island, the latter possibly related to statistical power, the authors said.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested light pollution may also influence Alzheimer’s disease through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and negatively alter the clearance of amyloid beta, and by decreasing the levels of brain-derived neurotrophic factor.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology including breast cancer in night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night and if risk factors such as air pollution and low socioeconomic status may correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported RO1 and R24 grants from the National Institutes of Health (NIH), one coauthor reported an NIH R24 grant; another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers of Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018, and the states were divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
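The grouping step described above can be sketched as follows. This is an illustrative reconstruction, not the study’s code, and the radiance values below are hypothetical placeholders rather than the NASA satellite measurements:

```python
# Sketch of the grouping step: rank states by average nighttime radiance
# and split them into five groups, darkest first. Radiance values here are
# hypothetical, not the study's data.

def split_into_five_groups(radiance_by_state):
    """Return five lists of state codes, ordered darkest group first."""
    ranked = sorted(radiance_by_state, key=radiance_by_state.get)
    size = len(ranked) // 5  # 48 states do not divide evenly; extras go to the last group
    groups = [ranked[i * size:(i + 1) * size] for i in range(4)]
    groups.append(ranked[4 * size:])
    return groups

# Hypothetical 10-state example (values are illustrative only)
demo = {"MT": 1.0, "WY": 1.2, "SD": 1.3, "ID": 1.5, "ME": 1.6,
        "CT": 8.2, "MD": 8.5, "DE": 8.7, "RI": 9.1, "NJ": 9.8}
groups = split_into_five_groups(demo)
print(groups[0])   # darkest pair -> ['MT', 'WY']
print(groups[-1])  # brightest pair -> ['RI', 'NJ']
```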
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that states with the lowest average nighttime light had significantly different Alzheimer’s disease prevalence than those with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity was significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian and Pacific Islander individuals; the latter finding was possibly related to limited statistical power, the authors said.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested light pollution may also influence Alzheimer’s disease through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and negatively alter the clearance of amyloid beta, and by decreasing the levels of brain-derived neurotrophic factor.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology including breast cancer in night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night and if risk factors such as air pollution and low socioeconomic status may correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported R01 and R24 grants from the National Institutes of Health (NIH); one coauthor reported an NIH R24 grant, and another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
FROM FRONTIERS IN NEUROSCIENCE
Cancer Cases, Deaths in Men Predicted to Surge by 2050
TOPLINE:
Cancer cases and deaths among men are projected to surge by 2050 — with substantial disparities in cancer cases and deaths by age and region of the world, a recent analysis found.
METHODOLOGY:
- Overall, men have higher cancer incidence and mortality rates, which can be largely attributed to a higher prevalence of modifiable risk factors such as smoking, alcohol consumption, and occupational carcinogens, as well as the underuse of cancer prevention, screening, and treatment services.
- To assess the burden of cancer in men of different ages and from different regions of the world, researchers analyzed data from the 2022 Global Cancer Observatory (GLOBOCAN), which provides national-level estimates for cancer cases and deaths.
- Study outcomes included the incidence, mortality, and prevalence of cancer among men in 2022, along with projections for 2050. Estimates were stratified by several factors, including age; region; and Human Development Index (HDI), a composite score for health, education, and standard of living.
- Researchers also calculated mortality-to-incidence ratios (MIRs) for various cancer types, where higher values indicate worse survival.
TAKEAWAY:
- The researchers reported an estimated 10.3 million cancer cases and 5.4 million deaths globally in 2022, with almost two thirds of cases and deaths occurring in men aged 65 years or older.
- By 2050, cancer cases and deaths were projected to increase by 84.3% (to 19 million) and 93.2% (to 10.5 million), respectively. The increase from 2022 to 2050 was more than twofold higher for older men and countries with low and medium HDI.
- In 2022, the estimated global cancer MIR among men was nearly 55%, with variations by cancer types, age, and HDI. The MIR was lowest for thyroid cancer (7.6%) and highest for pancreatic cancer (90.9%); among World Health Organization regions, Africa had the highest MIR (72.6%), while the Americas had the lowest MIR (39.1%); countries with the lowest HDI had the highest MIR (73.5% vs 41.1% for very high HDI).
- Lung cancer was the leading cause for cases and deaths in 2022 and was projected to remain the leading cause in 2050.
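The projections above are straightforward to check arithmetically: applying the reported percentage increases to the 2022 estimates approximately reproduces the 2050 figures (small gaps reflect rounding of the published percentages). A minimal sketch:

```python
# Arithmetic check of the GLOBOCAN-based projections quoted above
# (figures in millions of men; growth rates are the reported increases).

cases_2022, deaths_2022 = 10.3, 5.4        # 2022 estimates
case_growth, death_growth = 0.843, 0.932   # +84.3% and +93.2% by 2050

cases_2050 = cases_2022 * (1 + case_growth)
deaths_2050 = deaths_2022 * (1 + death_growth)

# The article rounds these to 19 million and 10.5 million, respectively.
print(f"Projected 2050 cases:  {cases_2050:.1f} million")   # ~19.0
print(f"Projected 2050 deaths: {deaths_2050:.1f} million")  # ~10.4
```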
IN PRACTICE:
“Disparities in cancer incidence and mortality among men were observed across age groups, countries/territories, and HDI in 2022, with these disparities projected to widen further by 2050,” according to the authors, who called for efforts to “reduce disparities in cancer burden and ensure equity in cancer prevention and care for men across the globe.”
SOURCE:
The study, led by Habtamu Mellie Bizuayehu, PhD, School of Public Health, Faculty of Medicine, The University of Queensland, Brisbane, Australia, was published online in Cancer.
LIMITATIONS:
The findings may be influenced by the quality of GLOBOCAN data. Interpretation should be cautious as MIR may not fully reflect cancer outcome inequalities. The study did not include other measures of cancer burden, such as years of life lost or years lived with disability, which were unavailable from the data source.
DISCLOSURES:
The authors did not disclose any funding information. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
HIIT May Best Moderate Exercise for Poststroke Fitness
High-intensity interval training (HIIT) may improve cardiorespiratory fitness after stroke more than traditional moderate-intensity exercise, according to a multicenter randomized controlled trial.
“We hoped that we would see improvements in cardiovascular fitness after HIIT and anticipated that these improvements would be greater than in the moderate-intensity group, but we were pleasantly surprised by the degree of improvement we observed,” said Ada Tang, PT, PhD, associate professor of health sciences at McMaster University in Hamilton, Ontario, Canada. “The improvements seen in the HIIT group were twofold higher than in the other group.”
The results were published in Stroke.
Clinically Meaningful
Researchers compared the effects of 12 weeks of short-interval HIIT with those of moderate-intensity continuous training (MICT) on peak oxygen uptake (VO2peak), cardiovascular risk factors, and mobility outcomes after stroke.
They randomly assigned participants to 3 days per week of HIIT or traditional moderate-intensity exercise sessions for 12 weeks. Participants’ mean age was 65 years, and 39% were women. They enrolled a mean of 1.8 years after sustaining a mild stroke.
A total of 42 participants were randomized to HIIT and 40 to MICT. There were no significant differences between the groups at baseline, and both groups exercised on adaptive recumbent steppers, which are suitable for stroke survivors with varying abilities.
The short-interval HIIT protocol involved ten 1-minute intervals of high-intensity exercise interspersed with nine 1-minute low-intensity intervals, for a total of 19 minutes. HIIT intervals targeted 80% heart rate reserve (HRR) and progressed by 10% every 4 weeks, up to 100% HRR. The low-intensity intervals targeted 30% HRR.
The traditional MICT protocol for stroke rehabilitation targeted 40% HRR for 20 minutes and progressed by 10% HRR and 5 minutes every 4 weeks, up to 60% HRR for 30 minutes.
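The HRR percentages above translate into beats-per-minute targets via the standard Karvonen formula (target = resting HR + fraction × [maximal HR − resting HR]). The resting and maximal heart rates below are hypothetical, not values from the trial:

```python
# Karvonen-formula sketch of the HRR targets described above.
# The resting/max heart rates are hypothetical examples.

def target_hr(rest_hr, max_hr, hrr_fraction):
    """Heart-rate target (bpm) for a given fraction of heart rate reserve."""
    return rest_hr + hrr_fraction * (max_hr - rest_hr)

rest, peak = 60, 160  # hypothetical resting and maximal heart rates (bpm)
print(target_hr(rest, peak, 0.80))  # HIIT work interval at 80% HRR -> 140.0
print(target_hr(rest, peak, 0.30))  # low-intensity interval at 30% HRR -> 90.0
print(target_hr(rest, peak, 0.40))  # MICT starting target at 40% HRR -> 100.0

# Session structure: ten 1-minute work bouts alternating with nine 1-minute
# recovery bouts gives the 19-minute HIIT session described above.
session_minutes = 10 * 1 + 9 * 1
print(session_minutes)  # 19
```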
The HIIT group’s cardiorespiratory fitness levels (VO2peak) improved twice as much as those of the MICT group: 3.5 mL of oxygen consumed in 1 minute per kg of body weight (mL/kg/min) compared with 1.8 mL/kg/min.
Of note, changes in VO2peak from baseline remained above the clinically important threshold of 1.0 mL/kg/min at 8-week follow-up in the HIIT group (1.71 mL/kg/min) but not in the MICT group (0.67 mL/kg/min).
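The threshold comparison in the preceding paragraph can be expressed as a simple check: a VO2peak change counts as clinically important when it exceeds 1.0 mL/kg/min. The deltas are the follow-up changes reported in the article:

```python
# Sketch of the clinically-important-difference check described above.

CLINICALLY_IMPORTANT = 1.0  # mL/kg/min, threshold cited in the article

def is_meaningful(delta_vo2peak):
    """True when a VO2peak change exceeds the clinically important threshold."""
    return delta_vo2peak > CLINICALLY_IMPORTANT

print(is_meaningful(1.71))  # HIIT group at 8-week follow-up -> True
print(is_meaningful(0.67))  # MICT group at 8-week follow-up -> False
```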
Both groups increased their 6-minute walk test distances by 8.8 m at 12 weeks and by 18.5 m at 20 weeks. No between-group differences were found for cardiovascular risk or mobility outcomes, and no adverse events occurred in either group.
On average, the HIIT group spent 36% of total training time exercising at intensities above 80% HRR throughout the intervention, while the MICT group spent 42% of time at intensities of 40%-59% HRR.
The study was limited by a small sample size of high-functioning individuals who sustained a mild stroke. Enrollment was halted for 2 years due to the COVID-19 lockdowns, limiting the study’s statistical power.
Nevertheless, the authors concluded, “Given that a lack of time is a significant barrier to the implementation of aerobic exercise in stroke clinical practice, our findings suggest that short-interval HIIT may be an effective alternative to traditional MICT for improving VO2peak after stroke, with potential clinically meaningful benefits sustained in the short-term.”
“Our findings show that a short HIIT protocol is possible in people with stroke, which is exciting to see,” said Tang. “But there are different factors that clinicians should consider before recommending this training for their patients, such as their health status and their physical status. Stroke rehabilitation specialists, including stroke physical therapists, can advise on how to proceed to ensure the safety and effectiveness of HIIT.”
Selected Patients May Benefit
“Broad implementation of this intervention may be premature without further research,” said Ryan Glatt, CPT, senior brain health coach and director of the FitBrain Program at Pacific Neuroscience Institute in Santa Monica, California. “The study focused on relatively high-functioning stroke survivors, which raises questions about the applicability of the results to those with more severe impairments.” Mr. Glatt did not participate in the research.
“Additional studies are needed to confirm whether these findings are applicable to more diverse and severely affected populations and to assess the long-term sustainability of the benefits observed,” he said. “Also, the lack of significant improvements in other critical outcomes, such as mobility, suggests limitations in the broader application of HIIT for stroke rehabilitation.”
“While HIIT shows potential, it should be approached with caution,” Mr. Glatt continued. “It may benefit select patients, but replacing traditional exercise protocols with HIIT should not be done in all cases. More robust evidence and careful consideration of individual patient needs are essential.”
This study was funded by an operating grant from the Canadian Institutes of Health Research. Dr. Tang reported grants from the Canadian Institutes of Health Research, the Physiotherapy Foundation of Canada, and the Heart and Stroke Foundation of Canada. Mr. Glatt declared no relevant financial relationships.
A version of this article appeared on Medscape.com.
Veterans Found Relief From Chronic Pain Through Telehealth Mindfulness
TOPLINE:
METHODOLOGY:
- Researchers conducted a randomized clinical trial of 811 veterans who had moderate to severe chronic pain and were recruited from three Veterans Affairs facilities in the United States.
- Participants were divided into three groups: group MBI (n = 270), self-paced MBI (n = 271), and usual care (n = 270), with interventions lasting 8 weeks.
- The primary outcome was pain-related function measured using a scale on interference from pain in areas like mood, walking, work, relationships, and sleep at 10 weeks, 6 months, and 1 year.
- Secondary outcomes included pain intensity, anxiety, fatigue, sleep disturbance, participation in social roles and activities, depression, and posttraumatic stress disorder (PTSD).
TAKEAWAY:
- Pain-related function improved significantly in both MBI groups versus the usual care group, with a mean difference of −0.4 (95% CI, −0.7 to −0.2) for group MBI and −0.7 (95% CI, −1.0 to −0.4) for self-paced MBI (P < .001 for both).
- Compared with the usual care group, both the MBI groups had significantly improved secondary outcomes, including pain intensity, depression, and PTSD.
- The probability of achieving 30% improvement in pain-related function was higher for group MBI at 10 weeks and 6 months and for self-paced MBI at all three timepoints.
- No significant differences were found between the MBI groups for primary and secondary outcomes.
IN PRACTICE:
“The viability and similarity of both these approaches for delivering MBIs increase patient options for meeting their individual needs and could help accelerate and improve the implementation of nonpharmacological pain treatment in health care systems,” the study authors wrote.
SOURCE:
The study was led by Diana J. Burgess, PhD, of the Center for Care Delivery and Outcomes Research, VA Health Systems Research in Minneapolis, Minnesota, and published online in JAMA Internal Medicine.
LIMITATIONS:
The trial was not designed to compare less resource-intensive MBIs with more intensive mindfulness-based stress reduction programs or in-person MBIs. The study did not address cost-effectiveness or control for time, attention, and other contextual factors. The high nonresponse rate (81%) to initial recruitment may have affected the generalizability of the findings.
DISCLOSURES:
The study was supported by the Pain Management Collaboratory–Pragmatic Clinical Trials Demonstration. Various authors reported grants from the National Center for Complementary and Integrative Health and the National Institute of Nursing Research.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Hearing Loss, Neuropathy Cut Survival in Older Adults
TOPLINE:
METHODOLOGY:
- Researchers analyzed 793 older adults recruited from primary care practices participating in the OKLAHOMA Studies in 1999.
- Participants completed a questionnaire and underwent a physical examination; timed gait assessments (50 ft); and tests for peripheral nerve function, balance, and hearing.
- Hearing thresholds were tested at 20, 25, and 40 dB and at sound frequencies of 500, 1000, 2000, and 4000 Hz.
- Researchers tracked mortality data over 22 years.
TAKEAWAY:
- Overall, 83% of participants experienced hearing loss. Regular use of hearing aids was low, reported in 19% and 55% of those with moderate and severe hearing loss, respectively.
- Hearing loss was linked to impaired balance (P = .0014), slower walking (P = .0024), and reduced survival time (P = .0001). Moderate to severe hearing loss was strongly associated with reduced survival time (odds ratio, 1.36; P = .001), independent of the use of hearing aids.
- Peripheral neuropathy was present in 32% of participants. The condition also increased the risk for death over the study period (hazard ratio [HR], 1.32; P = .003). Participants with both hearing loss and peripheral neuropathy showed reduced balance and survival time compared with people with either condition alone (HR, 1.55; P < .0001).
IN PRACTICE:
“Like peripheral neuropathy, advanced-age hearing loss is associated with reduced life expectancy, probably mediated in part through an adverse impact on balance,” the authors wrote. “Greater appreciation for the serious impacts of hearing loss and peripheral neuropathy could lead to further efforts to understand their causes and improve prevention and treatment strategies.”
SOURCE:
The study was led by James W. Mold, MD, MPH, of the University of Oklahoma Health Sciences Center, Oklahoma City. It was published online in the Journal of the American Geriatrics Society.
LIMITATIONS:
The dataset was collected in 1999 and may not entirely represent the current cohorts of older primary care patients. The absence of soundproof rooms and the exclusion of some components of the standard audiometric evaluation may have affected low-frequency sound measurements. Furthermore, physical examination was a less accurate measure of peripheral neuropathy. Information on the duration or severity of predictors and causes of death was not available.
DISCLOSURES:
The study was funded by the Presbyterian Health Foundation. The authors did not disclose any competing interests.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
AI Matches Expert Interpretation of Routine EEGs
, according to investigators.
These findings suggest that SCORE-AI, the model tested, can reliably interpret common EEGs in real-world practice, supporting its recent FDA approval, reported lead author Daniel Mansilla, MD, a neurologist at Montreal Neurological Institute and Hospital, and colleagues.
“Overinterpretation of clinical EEG is the most common cause of misdiagnosing epilepsy,” the investigators wrote in Epilepsia. “AI tools may be a solution for this challenge, both as an additional resource for confirmation and classification of epilepsy, and as an aid for the interpretation of EEG in critical care medicine.”
To date, however, AI tools have struggled with the variability encountered in real-world neurology practice. “When tested on external data from different centers and diverse patient populations, and using equipment distinct from the initial study, medical AI models frequently exhibit modest performance, and only a few AI tools have successfully transitioned into medical practice,” the investigators wrote.
SCORE-AI Matches Expert Interpretation of Routine EEGs
The present study put SCORE-AI to the test with EEGs from 104 patients aged 16-91 years. These individuals hailed from “geographically distinct” regions, while recording equipment and conditions also varied widely, according to Dr. Mansilla and colleagues.
To set an external gold standard for comparison, EEGs were first interpreted by three human expert raters, who were blinded to all case information except the EEGs themselves. The dataset comprised 50% normal and 50% abnormal EEGs. Four major classes of EEG abnormalities were included: focal epileptiform, generalized epileptiform, focal nonepileptiform, and diffuse nonepileptiform.
Comparing SCORE-AI interpretations with the experts’ interpretations revealed no significant difference in any metric or category. The AI tool had an overall accuracy of 92%, compared with 94% for the human experts. Of note, SCORE-AI maintained this level of performance regardless of vigilance state or normal variants.
“SCORE-AI has obtained FDA approval for routine clinical EEGs and is presently being integrated into broadly available EEG software (Natus NeuroWorks),” the investigators wrote.
Further Validation May Be Needed
Wesley T. Kerr, MD, PhD, functional (nonepileptic) seizures clinic lead epileptologist at the University of Pittsburgh Medical Center, and handling associate editor for this study in Epilepsia, said the present findings are important because they show that SCORE-AI can perform in scenarios beyond the one in which it was developed.
Still, it may be premature for broad commercial rollout.
In a written comment, Dr. Kerr called for “much larger studies” to validate SCORE-AI, noting that seizures can be caused by “many rare conditions,” and some patients have multiple EEG abnormalities.
Since SCORE-AI has not yet demonstrated accuracy in those situations, he predicted that the tool will remain exactly that – a tool – before it replaces human experts.
“They have only looked at SCORE-AI by itself,” Dr. Kerr said. “Practically, SCORE-AI is going to be used in combination with a neurologist for a long time before SCORE-AI can operate semi-independently or independently. They need to do studies looking at this combination to see how this tool impacts the clinical practice of EEG interpretation.”
Daniel Friedman, MD, an epileptologist and associate clinical professor of neurology at NYU Langone, pointed out another limitation of the present study: The EEGs were collected at specialty centers.
“The technical standards of data collection were, therefore, pretty high,” Dr. Friedman said in a written comment. “The majority of EEGs performed in the world are not collected by highly skilled EEG technologists and the performance of AI classification algorithms under less-than-ideal technical conditions is unknown.”
AI-Assisted EEG Interpretation Is Here to Stay
When asked about the long-term future of AI-assisted EEG interpretation, Dr. Friedman predicted that it will be “critical” for helping improve the accuracy of epilepsy diagnoses, particularly because most EEGs worldwide are interpreted by non-experts, leading to the known issue with epilepsy misdiagnosis.
“However,” he added, “it is important to note that epilepsy is a clinical diagnosis ... [EEG] is only one piece of evidence in neurologic decision making. History and accurate eyewitness description of the events of concern are extremely critical to the diagnosis and cannot be replaced by AI yet.”
Dr. Kerr offered a similar view, highlighting the potential for SCORE-AI to raise the game of non-epileptologists.
“My anticipation is that neurologists who don’t use SCORE-AI will be replaced by neurologists who use SCORE-AI well,” he said. “Neurologists who use it well will be able to read more EEGs in less time without sacrificing quality. This will allow the neurologist to spend more time talking with the patient about the interpretation of the tests and how that impacts clinical care.”
Then again, that time spent talking with the patient may also one day be delegated to a machine.
“It is certainly imaginable that AI chatbots using large language models to interact with patients and family could be developed to extract consistent epilepsy histories for diagnostic support,” Dr. Kerr said.
This work was supported by a project grant from the Canadian Institutes of Health Research and Duke Neurology start-up funding. The investigators and interviewees reported no relevant conflicts of interest.
FROM EPILEPSIA
Is Vision Loss a New Dementia Risk Factor? What Do the Data Say?
In 2019, 57 million people worldwide were living with dementia, a figure expected to soar to 153 million by 2050. A recent Lancet Commission report suggests that nearly half of dementia cases could be prevented or delayed by addressing 14 modifiable risk factors, including impaired vision.
The report’s authors recommend that vision-loss screening and treatment be universally available. But are these recommendations warranted? What is the evidence? What is the potential mechanism? And what are the potential implications for clinical practice?
Worldwide, the prevalence of avoidable vision loss and blindness in adults aged 50 years or older is estimated to hover around 13%.
“There is now overwhelming evidence that vision impairment in later life is associated with more rapid cognitive decline and an increased risk of dementia,” said Joshua Ehrlich, MD, MPH, associate professor in ophthalmology and visual sciences, the Institute for Social Research at the University of Michigan, Ann Arbor.
The evidence includes a meta-analysis of 14 prospective cohort studies with roughly 6.2 million older adults who were cognitively intact at baseline. Over the course of up to 14 years, 171,888 developed dementia. Vision loss was associated with a pooled relative risk (RR) for dementia of 1.47.
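Estimates like this pooled relative risk are often combined with exposure prevalence to gauge the share of cases attributable to the risk factor. A minimal sketch using Levin's classic formula is below; the illustrative inputs (the ~13% prevalence of avoidable vision loss cited above and the pooled RR of 1.47) are plugged in only for demonstration, and published attributable-fraction estimates use more involved, covariate-adjusted methods.

```python
def levin_paf(prevalence: float, rr: float) -> float:
    """Levin's formula: population attributable fraction (PAF)
    from exposure prevalence and relative risk.
    PAF = p(RR - 1) / (1 + p(RR - 1))
    """
    excess = prevalence * (rr - 1)
    return excess / (1 + excess)

# Illustrative numbers only: ~13% prevalence of avoidable vision
# loss and the pooled RR of 1.47 from the meta-analysis above.
paf = levin_paf(0.13, 1.47)
print(f"{paf:.1%}")  # roughly 5.8%
```

Note that this crude figure is lower than adjusted estimates reported elsewhere, which also account for additional impairment types such as reduced contrast sensitivity.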
A separate meta-analysis also identified an increased risk for dementia (RR, 1.38) with visual loss. When broken down into different eye conditions, an increased dementia risk was associated with cataracts and diabetic retinopathy but not with glaucoma or age-related macular degeneration.
A US study that followed roughly 3000 older adults with cataracts and normal cognition at baseline for more than 20 years found that those who had cataract extraction had significantly reduced risk for dementia compared with those who did not have cataract extraction (hazard ratio, 0.71), after controlling for age, race, APOE genotype, education, smoking, and an extensive list of comorbidities.
Causation or Coincidence?
The mechanisms behind these associations might be related to underlying illness, such as diabetes, which is a risk factor for dementia; vision loss itself, as might be suggested by a possible effect of cataract surgery; or shared neuropathologic processes in the retina and the brain.
A longitudinal study from Korea that included roughly 6 million adults showed that dementia risk increased with the severity of visual loss, which supports the hypothesis that vision loss itself is causal, or that a shared causal factor acts in a dose-dependent manner.
“Work is still needed to sort out” exactly how visual deficits may raise dementia risk, although several hypotheses exist, Dr. Ehrlich said.
For example, “decreased input to the brain via the visual pathways may directly induce brain changes. Also, consequences of vision loss, like social isolation, physical inactivity, and depression, are themselves risk factors for dementia and may explain the pathways through which vision impairment increases risk,” he said.
Is the link causal? “We’ll never know definitively because we can’t randomize people to not get cataract surgery versus getting cataract surgery, because we know that improving vision improves quality of life, so we’d never want to do that. But the new evidence that’s come in over the last 5 years or so is pretty promising,” said Esme Fuller-Thomson, PhD, director of the Institute for Life Course and Aging and professor, Department of Family and Community Medicine and Faculty of Nursing, at the University of Toronto, Ontario, Canada.
She noted that results of two studies that have looked at this “seem to indicate that those who have cataract surgery are not nearly at as high risk of dementia as those who have cataracts but don’t have the surgery. That’s leaning towards causality.”
A study published in July suggests that cataracts increase dementia risk through vascular and non–Alzheimer’s disease mechanisms.
Clear Clinical Implications
Dr. Ehrlich said that evidence for an association between untreated vision loss and dementia risk and potential modification by treatment has clear implications for care.
“Loss of vision impacts so many aspects of people’s lives beyond just how they see the world and losing vision in later life is not a normal part of aging. Thus, when older adults experience vision loss, this should be a cause for concern and prompt an immediate referral to an eye care professional,” he noted.
Dr. Fuller-Thomson agrees. “Addressing vision loss will certainly help people see better and function at a higher level and improve quality of life, and it seems probable that it might decrease dementia risk so it’s a win-win,” she said.
In her own research, Dr. Fuller-Thomson has found that the combination of hearing loss and vision loss is linked to an eightfold increased risk for cognitive impairment.
“The idea is that vision and/or hearing loss makes it harder for you to be physically active, to be socially engaged, to be mentally stimulated. They are equally important in terms of social isolation, which could lead to loneliness, and we know that loneliness is not good for dementia,” she said.
“With dual sensory impairment, you don’t have as much information coming in — your brain is not engaged as much — and having an engaged brain, doing hobbies, having intellectually stimulating conversation, all of those factors are associated with lowering risk of dementia,” Dr. Fuller-Thomson said.
The latest Lancet Commission report noted that treatment for visual loss is “effective and cost-effective” for an estimated 90% of people. However, across the world, particularly in low- and middle-income countries, visual loss often goes untreated, the report concluded.
Dr. Ehrlich and Dr. Fuller-Thomson have no relevant conflicts of interest.
A version of this article appeared on Medscape.com.