White Matter Shows Decline After Bipolar Diagnosis
Adults with recently diagnosed bipolar disorder showed a decline in total cerebral white matter over time while healthy controls remained stable, based on data from 88 individuals.
Patients with bipolar disorder demonstrate cognitive impairment and brain structure abnormalities, including global white matter loss, that have been associated with poor outcomes, but data on the stability or progression of neuroanatomical changes are limited, wrote Julian Macoveanu, PhD, of Copenhagen University Hospital, Denmark, and colleagues.
In a study published in The Journal of Affective Disorders, the researchers identified 97 adults aged 18 to 60 years with recently diagnosed bipolar disorder and matched them with 66 healthy controls. Participants were enrolled in the larger Bipolar Illness Onset (BIO) study. All participants underwent structural MRI and neuropsychological testing at baseline and were in full or partial remission, defined as total scores of 14 or less on both the Hamilton Depression Rating Scale and the Young Mania Rating Scale. Because of limited resources, only about half of the participants (50 bipolar patients and 38 controls) underwent follow-up scans and testing after 6-27 months (mean, 16 months), according to the researchers.
The researchers compared changes in cortical gray matter volume and thickness, total cerebral white matter, hippocampal and amygdala volumes, estimated brain age, and cognitive functioning over time. In addition, they examined within-patient associations between baseline brain structure abnormalities and later mood episodes.
Overall, patients with bipolar disorder (BD) showed a significant decrease in total cerebral white matter from baseline compared with healthy controls (HC) in mixed models (P = .006). “This effect was driven by BD patients showing a decrease in WM volume over time compared to HC who remained stable,” the researchers wrote, and the effect persisted in a post hoc analysis adjusting for subsyndromal symptoms and body mass index.
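As a rough illustration of the kind of group-by-time analysis described above, the sketch below fits a linear mixed-effects model with a random intercept per participant and tests the group × time interaction on white matter volume. The variable names and simulated data are illustrative assumptions, not the study's actual code or data.

```python
# Minimal sketch of a group x time linear mixed-effects model, as commonly used
# to test whether white matter volume changes differently in patients vs controls.
# Variable names, group sizes, and simulated values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for group in ("HC", "BD"):
    for subj in range(44):                       # 44 per group, illustrative only
        baseline = rng.normal(500.0, 30.0)       # arbitrary volume units
        slope = -4.0 if group == "BD" else 0.0   # BD declines, HC stays stable
        for time in (0, 1):                      # baseline and follow-up visits
            rows.append({
                "subject": f"{group}{subj}",
                "group": group,
                "time": time,
                "wm_volume": baseline + slope * time + rng.normal(0.0, 5.0),
            })
df = pd.DataFrame(rows)

# Random intercept per subject; the group:time coefficient tests whether the
# baseline-to-follow-up change differs between BD patients and controls.
model = smf.mixedlm("wm_volume ~ group * time", data=df, groups=df["subject"])
print(model.fit().summary())
```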
BD patients also had a larger amygdala volume than HC at both baseline and follow-up, but the change over time did not differ between the groups. Changes in hippocampal volume were likewise similar between the groups.
Analysis of cognitive data showed no significant differences in trajectories between BD patients and controls across cognitive domains or globally, although BD patients performed worse than controls at both time points.
BD patients in general had lower functioning and worse quality of life compared with controls, but the two groups' trajectories were similar for both functioning and quality of life.
The researchers found no significant differences over time in total white matter, hippocampus, or amygdala volumes between BD patients who experienced at least one mood episode during the study period and those who remained in remission.
The findings were limited by several factors, including the small sample size and limited generalizability because of the restriction to patients in full or partial remission, the researchers noted. Other limitations included the variation in follow-up time and the potential impact of psychotropic medication use.
However, the results were strengthened by the use of neuropsychological testing in addition to MRI to compare brain structure and cognitive function, the researchers said. The data suggest that both amygdala volume and cognitive impairment may be stable markers of BD soon after diagnosis, whereas the decrease in white matter may reflect disease progression.
The BIO study is funded by the Mental Health Services, Capital Region of Denmark, the Danish Council for Independent Research, Medical Sciences, Weimans Fund, Markedsmodningsfonden, Gangstedfonden, Læge Sofus Carl Emil og hustru Olga Boris Friis’ legat, Helsefonden, Innovation Fund Denmark, Copenhagen Center for Health Technology (CACHET), EU H2020 ITN, Augustinusfonden, and The Capital Region of Denmark. Macoveanu had no financial conflicts to disclose.
FROM THE JOURNAL OF AFFECTIVE DISORDERS
Cognitive Decline and Antihypertensive Use: New Data
TOPLINE:
Deprescribing antihypertensive medications was associated with a lower likelihood of cognitive decline in older long-term care residents, a new study suggests. The association was strongest among those with dementia.
METHODOLOGY:
- The cohort study included 12,644 long-term care residents (mean age, 77.7 years; 97% men; 17.5% Black) with stays of at least 12 weeks from 2006 to 2019.
- Residents who experienced either a reduction in the total number of antihypertensive medications or a sustained 30% decrease in dosage for at least 2 weeks were classified as deprescribing users (n = 1290); a rough sketch of this classification rule appears after this list. Those with no medication changes were considered stable users (n = 11,354).
- The primary outcome was cognitive impairment, assessed using the four-point Cognitive Function Scale (CFS), on which higher scores indicate more severe impairment.
- The median follow-up duration was 23 weeks for the deprescribing users and 21 weeks for the stable users.
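As a rough sketch of the deprescribing rule from the methodology list above, the hypothetical code below classifies a resident as a deprescribing user when the number of antihypertensive medications falls, or when the total dose drops by at least 30% and stays at that level for at least 2 weeks. The data structure, dose units, and threshold handling are assumptions, not the study's actual algorithm.

```python
# Hypothetical sketch of the deprescribing classification described above.
# Not the study's code; the weekly-record structure and units are assumed.
from dataclasses import dataclass

@dataclass
class WeeklyRecord:
    week: int
    n_antihypertensives: int
    total_dose: float  # standardized daily dose, units assumed

def is_deprescribing_user(records: list[WeeklyRecord]) -> bool:
    records = sorted(records, key=lambda r: r.week)
    baseline = records[0]
    consecutive_low_dose_weeks = 0
    for rec in records[1:]:
        # Criterion 1: fewer antihypertensive medications than at baseline
        if rec.n_antihypertensives < baseline.n_antihypertensives:
            return True
        # Criterion 2: >=30% dose reduction sustained for at least 2 weeks
        if rec.total_dose <= 0.7 * baseline.total_dose:
            consecutive_low_dose_weeks += 1
            if consecutive_low_dose_weeks >= 2:
                return True
        else:
            consecutive_low_dose_weeks = 0
    return False

# Example: dose cut from 40 to 25 (>30% reduction) and held for 2 weeks
history = [WeeklyRecord(0, 2, 40.0), WeeklyRecord(1, 2, 25.0), WeeklyRecord(2, 2, 25.0)]
print(is_deprescribing_user(history))  # True
```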
TAKEAWAY:
- Deprescribing antihypertensives was associated with a 12% lower likelihood of progressing to a worse CFS score per 12-week period (odds ratio [OR], 0.88; 95% CI, 0.78-0.99), compared with stable users.
- Among residents with dementia, deprescribing was associated with a 16% reduced likelihood of cognitive decline per 12-week period (OR, 0.84; 95% CI, 0.72-0.98).
- At the end of follow-up, 12% of residents had a higher CFS score and 7.7% had a lower CFS score.
- In the intention-to-treat analysis, the association between deprescribing antihypertensive medications and reduced cognitive decline remained consistent (OR, 0.94; 95% CI, 0.90-0.98).
IN PRACTICE:
“This work highlights the need for patient-centered approaches to deprescribing, ensuring that medication regimens for older adults are optimized to preserve cognitive function and minimize potential harms,” the study authors wrote.
SOURCE:
The study was led by Bocheng Jing, MS, Department of Medicine, University of California, San Francisco. It was published online in JAMA Internal Medicine.
LIMITATIONS:
The study population included predominantly men and White individuals, limiting the generalizability of the results to women and other racial and ethnic groups. The findings may not be applicable to patients with heart failure, who were not included in the study. The specificity of the dementia diagnosis was limited because the study combined various forms of dementia, making it challenging to differentiate the impact among subgroups.
DISCLOSURES:
This study was supported by the US National Institute on Aging. Two authors reported receiving grants, honoraria, consulting fees, or royalties from various sources. Details are provided in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
DIY Brain Stimulation Is Growing in Popularity, but Is It Safe, Effective?
As at-home, do-it-yourself (DIY) brain stimulation devices like transcranial direct current stimulation (tDCS) gain popularity for common psychiatric conditions like depression, anxiety, and posttraumatic stress disorder (PTSD), questions arise about their safety and efficacy.
However, the US Food and Drug Administration (FDA) has yet to “fully” clear any of these devices and has only granted breakthrough device designation to a few. In addition, most of the portable products don’t market themselves as medical interventions, putting them into a regulatory “gray area” that has little oversight.
This has led to a free-for-all environment, allowing individuals to purchase these products online and self-administer “treatment” — often without the guidance or even knowledge of their healthcare providers.
So how effective and safe are these noninvasive brain stimulators? What does the research show, what are the ethical considerations, and what guidance, if any, should clinicians provide to patients who are using, or contemplating using, them at home?
What the Research Shows
Data from studies examining both unsupervised at-home use and use under medical supervision are mixed. Results from a recent randomized trial of more than 200 participants showed no significant difference in safety or efficacy between adjunctive at-home tDCS and at-home sham tDCS for depressive symptoms.
“To be fair, they did not find any unexpected safety issues. What they did find was that there was no clear signal that it worked,” said Noah S. Philip, MD, professor of psychiatry and human behavior, Warren Alpert Medical School of Brown University, Providence, Rhode Island.
Philip, who is also lead for mental health research at Brown’s Center for Neurorestoration and Neurotechnology, Providence, Rhode Island, and was not involved in the study, noted that while other research papers have shown more promising results for depression and other conditions such as adult attention-deficit/hyperactivity disorder (ADHD) and pain, they often are not placebo controlled and do not include large numbers of patients.
Still, he added, the growing use of these devices reflects the fact that standard treatment often doesn’t meet patients’ needs.
“Broadly speaking, part of the hope with brain stimulation is that instead of taking a pill, we’re trying to more directly affect the brain tissues involved — and therefore, avoid the issue of having systemic side effects that you get from the meds. There’s certainly a hunger” for better interventions, Philip said.
tDCS involves a low-intensity electrical current applied through electrodes on the scalp to influence brain activity. Generally speaking, it emits less energy than other types of noninvasive brain stimulation, such as transcranial magnetic stimulation. “The trade-off is that it’s also a little harder to find a clear signal about how it works,” Philip said.
As such, he added, it’s important for clinicians to familiarize themselves with these devices, to ask about patient use, and to set up structured assessments of efficacy and adverse events.
Results from a randomized trial published last year in The Lancet showed no significant benefit for in-office use of tDCS plus a selective serotonin reuptake inhibitor vs sham tDCS for major depression.
On the other hand, a randomized trial published earlier this year in Brain Stimulation showed that older adults who received active tDCS had greater reductions in depressive and anxiety symptoms than those in the sham group.
In addition, results from a small study of eight participants published last year in SAGE Open Medicine showed adjuvant tDCS helped patients with refractory PTSD. Finally, a randomized trial of 54 veterans from Philip’s own team showed tDCS plus virtual reality was effective for combat-related PTSD.
Although there have also been several studies showing possible benefit of tDCS for Alzheimer’s disease, Gayatri Devi, MD, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, noted in a Medscape Neurology Decision Point that “the problem with all these studies is that they’re all very small, and there [are] so many different variables in terms of how you interpret response.”
On-Demand Brain Stim
As for at-home use, a wide range of these devices is now available online, allowing individuals to apply daily brain stimulation via headsets without needing to consult a clinician. Most are battery-powered and emit a low-level current.
Philip noted that there are essentially two ways to obtain such devices. Some are readily available from online stores, while others require a prescription, which typically includes guidelines on how to use the device.
So far, none of these portable products have been fully cleared by the FDA — although the agency did grant Breakthrough Device designation to Sooma Medical for its device to treat depression in 2023 and to Flow Neuroscience in 2022.
In August 2023, Flow announced that its device is now being reviewed for full FDA clearance on the basis of trial results showing at-home tDCS was “twice as effective” as antidepressants. The company received regulatory approval in Europe in 2019.
Other research has shown “encouraging” results for these at-home devices for conditions such as adult ADHD and pain relief with remote supervision.
Philip noted that more high-quality randomized controlled trials are definitely needed, with “a number of companies probably getting close to releasing data sometime soon.”
Is it possible that a placebo effect is at work here? “Yes, partially,” said Philip. Users often become more mindful of managing their depression and other conditions, which leads to behavior change, he said.
A Quick Fix for a Broken System?
Joseph J. Fins, MD, the E. William Davis Jr, MD, Professor of Medical Ethics and chief of the Division of Medical Ethics at Weill Cornell Medicine, New York City, also believes there could be a placebo effect at play.
“It’s important that we don’t ascribe efficacy to a device without being aware of the placebo effect,” he said. That’s why more and larger, placebo-controlled trials are needed, he added.
There’s a multitude of reasons why patients may turn to at-home devices on their own, including drug shortages and the inability to see a psychiatrist in a timely manner.
“I think it speaks to the isolation of these folks that leads to them doing this on their own. These devices become a technological quick fix for a system that’s desperately broken. There’s nothing wrong with being a consumer, but at a certain point they need to be a patient, and they need to have a clinician there to help them,” he said.
Fins said that he also worries about regulatory oversight because of the way the devices are classified. He likened them to supplements, which, because they don’t make certain claims, are not regulated with the same stringency as other products and fall into an area “in between regulatory spheres.”
“I think we’re trying to take old regulatory frameworks and jerry-rig it to accommodate new and evolving technologies. And I think we need to have serious study of how we protect patients as they become consumers — to make sure there’s enough safety and enough efficacy and that they don’t get ripped off out of desperation,” Fins said.
As for safety, at-home devices are unlikely to cause physical harm — at least when used as intended. “The riskier situations happen when people build their own, overuse it, or use it in combination with drugs or alcohol or other factors that can produce unpredictable results,” Philip said.
He added that DIY-built products carry a higher risk for burns or excessive energy output. A 2016 “open letter” from a group of neurologists, published in Annals of Neurology, warned about the dangers of DIY tDCS.
In addition, Philip noted that he has seen instances where patients become manic after using at-home tDCS, especially when trying to improve cognition.
“We have seen a number of peculiar side effects emerge in those situations. Typically, it’s anxiety, panic attacks, and sensitivity to bright lights, in addition to the emergence of mania, which would require major psychiatric intervention,” he said.
“So, it’s important that if folks do engage with these sorts of things, it’s with some degree of medical involvement,” Philip added.
Ethical Considerations
Roy Hamilton, MD, professor of neurology, psychiatry, and physical medicine & rehabilitation at the University of Pennsylvania, in Philadelphia, said that in the setting of proper training, proper clinician communication, and proper oversight, he doesn’t view at-home tDCS as ethically problematic.
“For individuals who have conditions that are clearly causing them remarkable detriment to quality of life or to their health, it seems like the risk-benefit ratio with respect to the likelihood of harm is quite good,” said Hamilton, who is also the director of the Penn Brain Science, Translation, Innovation, and Modulation Center.
In addition, tDCS and other transcranial electrical stimulation techniques seem to have a better safety profile than “many of the other things we send patients home with to treat their pain,” he said.
On the other hand, this risk calculus changes in a scenario where patients are neurologically intact, he said.
The brain, Hamilton noted, exhibits functional differences based on the region undergoing stimulation. This means users should follow a specific, prescribed method. However, he pointed out that those using commercially available devices often lack clear guidance on where to place the electrodes and what intensity to use.
“This raises concerns because the way you use the device is important,” he said.
Hamilton also highlighted important ethical considerations regarding enhanced cognition through technology or pharmaceutical interventions. The possibility of coercive use raises questions about equity and fairness, particularly if individuals feel pressured to use such devices to remain competitive in academic or professional settings.
This mirrors the current issues surrounding the use of stimulants among students, where those without ADHD may feel compelled to use these drugs to improve performance. In addition, there is the possibility that the capacity to access devices that enhance cognition could exacerbate existing inequalities.
“Any time you introduce a technological intervention, you have to worry about discriminative justice. That’s where only people who can afford such devices or have access to specialists who can give them such devices get to receive improvements in their cognition,” Hamilton said.
Neither the American Academy of Neurology nor the American Psychiatric Association has established practice guidelines for tDCS, either for use in clinical settings or for use at home. Hamilton believes this is due to the current lack of data, noting that organizations likely want to see more approvals and widespread use before creating guidelines.
Fins emphasized the need for organized medicine to sponsor research, noting that the use of these devices is becoming a public health issue. He expressed concern that some devices are marketed as nonmedical interventions, despite involving medical procedures like brain stimulation. He concluded that while scrutiny is necessary, the current landscape should be approached without judgment.
Fins reported no relevant financial relationships. Philip reported serving on a scientific advisory board for Pulvinar Neuro and past involvement in clinical trials related to these devices and their use at home. Hamilton reported he is on the board of trustees for the McKnight Brain Research Foundation, which is dedicated to advancing healthy cognitive aging.
A version of this article first appeared on Medscape.com.
Smartphone Data Flag Early Dementia Risk in Older Adults
Smartphone-based tracking of how older adults find their way through a real-world environment may help flag those at increased risk for dementia-related cognitive decline, a novel real-world study suggested.
During a smartphone-assisted scavenger hunt on a university campus, researchers observed that older adults with subjective cognitive decline (SCD) paused more frequently than those without SCD, likely to reorient themselves. This behavior served as an identifier of individuals with SCD.
“Deficits in spatial navigation are one of the first signs of Alzheimer’s disease,” said study investigator Nadine Diersch, PhD, guest researcher with the German Center for Neurodegenerative Diseases (DZNE), Tübingen.
This study, said Diersch, provides “first evidence of how a digital footprint for early dementia-related cognitive decline might look like in real-world settings during a short (less than 30 minutes) and remotely performed wayfinding task.”
The study was published online in PLOS Digital Health.
Trouble With Orientation
A total of 72 men and women in their mid-20s to mid-60s participated in the study; 23 of the 48 older adults had SCD but still scored normally on neuropsychological assessments.
All study participants were instructed to independently find five buildings on the medical campus of the Otto-von-Guericke-University Magdeburg in Germany, guided by a smartphone app developed by the study team. Their patterns of movement were tracked by GPS.
All participants had similar knowledge of the campus, and all were experienced in using smartphones. They also practiced using the app beforehand.
In most cases, participants reached the five destinations in less than half an hour. The younger participants performed better than the older ones; on average, the younger adults walked shorter distances and generally did not use the help function on the app as often as the older ones.
In the older adults, the number of orientation stops was predictive of SCD status. The adults with SCD tended to hesitate more at intersections. A decline in executive functioning might explain this finding, Diersch said.
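The paper's actual processing pipeline isn't detailed here, but the core measurement — counting pauses in a GPS track as candidate orientation stops — can be sketched in a few lines. The thresholds and function names below are illustrative assumptions, not the study's parameters.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def count_orientation_stops(track, min_pause_s=10, max_drift_m=3):
    """Count spells of at least `min_pause_s` seconds during which the walker
    drifts less than `max_drift_m` meters. `track` is a list of
    (timestamp_s, lat, lon) tuples ordered in time. Thresholds are
    illustrative, not taken from the study."""
    stops, i = 0, 0
    while i < len(track):
        j = i
        # Extend the window while successive fixes stay near the anchor fix.
        while (j + 1 < len(track) and
               haversine_m(track[i][1], track[i][2],
                           track[j + 1][1], track[j + 1][2]) < max_drift_m):
            j += 1
        if track[j][0] - track[i][0] >= min_pause_s:
            stops += 1
        i = j + 1
    return stops
```

In a screening workflow, the per-participant stop count (normalized, for example, by route length or task duration) would then serve as a predictor of SCD status, much as it did in the group comparison reported here.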
“Intact executive functioning is an important component of efficient navigation, for example, when switching between different navigation strategies or planning a route. However, since this was the first study on that subject, more research is needed to determine the precise contribution of different cognitive processes on digital wayfinding data,” said Diersch.
With more study, “we think that such a smartphone-assisted wayfinding task, performed in the immediate surroundings, could be used as a low-threshold screening tool — for example, to stratify subjects with regard to the need of extended cognitive and clinical diagnostics in specialized care,” she added.
‘A Game Changer’
Commenting on the research, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, Florida, who wasn’t involved in the study, said the findings have the potential to “revolutionize” dementia care.
“We’ve seen smartphones transform everything from banking to dating — now they’re set to reshape brain health monitoring. This ingenious digital scavenger hunt detects cognitive decline in real-world scenarios, bypassing costly, complex tests. It’s a game changer,” said Lakhan.
“Just as we track our steps and calories, we could soon track our cognitive health with a tap. This isn’t just innovation; it’s the future of dementia prevention and care unfolding on our smartphone screens. We’re not just talking about convenience. We’re talking about catching Alzheimer’s before it catches us,” he added.
The next phase, Lakhan noted, would be to develop smartphone apps as digital therapeutics, not just to detect cognitive decline but to treat or even prevent it.
“Imagine your phone not only flagging potential issues but also providing personalized brain training exercises to keep your mind sharp and resilient against dementia,” Lakhan said.
This work was funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) within the Collaborative Research Center “Neural Resources of Cognition” and a DZNE Innovation-2-Application Award. Diersch is now a full-time employee of neotiv. Lakhan had no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM PLOS DIGITAL HEALTH
Long-Term Cognitive Monitoring Warranted After First Stroke
A first stroke in older adults is associated with substantial immediate and accelerated long-term cognitive decline, suggested a new study that underscores the need for continuous cognitive monitoring in this patient population.
Results from the study, which included 14 international cohorts of older adults, showed that stroke was associated with a significant acute decline in global cognition and a small, but significant, acceleration in the rate of cognitive decline over time.
Cognitive assessments in primary care are “crucial, especially since cognitive impairment is frequently missed or undiagnosed in hospitals,” lead author Jessica Lo, MSc, biostatistician and research associate with the Center for Healthy Brain Aging, University of New South Wales, Sydney, Australia, told this news organization.
She suggested clinicians incorporate long-term cognitive assessments into care plans, using more sensitive neuropsychological tests in primary care to detect early signs of cognitive impairment. “Early detection would enable timely interventions to improve outcomes,” Lo said.
She also noted that poststroke care typically includes physical rehabilitation but not cognitive rehabilitation, which many rehabilitation centers aren’t equipped to provide.
The study was published online in JAMA Network Open.
Mapping Cognitive Decline Trajectory
Cognitive impairment after stroke is common, but the trajectory of cognitive decline following a first stroke, relative to prestroke cognitive function, remains unclear.
The investigators leveraged data from 14 population-based cohort studies of 20,860 adults (mean age, 73 years; 59% women) to map the trajectory of cognitive function before and after a first stroke.
The primary outcome was global cognition, defined as the standardized average of four cognitive domains (language, memory, processing speed, and executive function).
During a mean follow-up of 7.5 years, 1041 (5%) adults (mean age, 79 years) experienced a first stroke, a mean of 4.5 years after study entry.
In adjusted analyses, stroke was associated with a significant acute decline of 0.25 SD in global cognition and a “small but significant” acceleration in the rate of decline of −0.038 SD per year, the authors reported.
Stroke was also associated with acute decline in all individual cognitive domains except for memory, with effect sizes ranging from −0.17 to −0.22 SD. Poststroke declines in Mini-Mental State Examination scores (−0.36 SD) were also noted.
In terms of cognitive trajectory, the rate of decline before stroke in survivors was similar to that seen in peers who didn’t have a stroke (−0.048 and −0.049 SD per year in global cognition, respectively).
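The article reports the prestroke slope, the acute drop, and the added poststroke slope as separate estimates; one plausible way to read them together is as a linear-spline (interrupted time-series) model of global cognition. This is an interpretation of the reported numbers, not the authors' published specification:

\[
\hat{y}(t) = \beta_0 + \beta_1 t + \delta\,\mathbf{1}[t \ge t_s] + \gamma\,(t - t_s)\,\mathbf{1}[t \ge t_s],
\]

with \(\beta_1 \approx -0.048\) SD per year (prestroke decline), \(\delta \approx -0.25\) SD (acute drop at the time of stroke \(t_s\)), and \(\gamma \approx -0.038\) SD per year (additional decline each year after stroke). Under this reading, a survivor would sit roughly \(0.25 + 5 \times 0.038 \approx 0.44\) SD below their projected prestroke trajectory 5 years after the event.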
The researchers did not identify any vascular risk factors moderating cognitive decline following a stroke, consistent with prior research. However, in the absence of stroke (regardless of any future stroke), cognitive decline was significantly more rapid in individuals with a history of diabetes, hypertension, high cholesterol, cardiovascular disease, depression, or smoking, and in APOE4 carriers.
“Targeting modifiable vascular risk factors at an early stage may reduce the risk of stroke but also subsequent risk of stroke-related cognitive decline and cognitive impairment,” the researchers noted.
A ‘Major Step’ in the Right Direction
As previously reported by this news organization, in 2023 the American Heart Association (AHA) issued a statement noting that screening for cognitive impairment should be part of multidisciplinary care for stroke survivors.
Commenting for this news organization, Mitchell Elkind, MD, MS, AHA chief clinical science officer, said these new data are consistent with current AHA guidelines and statements that “support screening for cognitive and functional decline in patients both acutely and over the long term after stroke.”
Elkind noted that the 2022 guideline for intracerebral hemorrhage states that cognitive screening should occur “across the continuum of inpatient care and at intervals in the outpatient setting” and provides recommendations for cognitive therapy.
“Our 2021 scientific statement on the primary care of patients after stroke also recommends screening for both depression and cognitive impairment over both the short- and long-term,” said Elkind, professor of neurology and epidemiology at Columbia University Irving Medical Center in New York City.
“These documents recognize the fact that function and cognition can continue to decline years after stroke and that patients’ rehabilitation and support needs may therefore change over time after stroke,” Elkind added.
The authors of an accompanying commentary called it a “major step” in the right direction for the future of long-term stroke outcome assessment.
“As we develop new devices, indications, and time windows for stroke treatment, it may perhaps be wise to ensure trials steer away from simpler outcomes to more complex, granular ones,” wrote Yasmin Sadigh, MSc, and Victor Volovici, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands.
The study had no commercial funding. The authors and commentary writers and Elkind have declared no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
A New Way to ‘Smuggle’ Drugs Through the Blood-Brain Barrier
Getting drugs to the brain is difficult. The very thing designed to protect the brain’s environment — the blood-brain barrier (BBB) — is one of the main reasons diseases like Alzheimer’s are so hard to treat.
And even if a drug can cross the BBB, it’s difficult to ensure it reaches specific areas of the brain like the hippocampus, which is located deep within the brain and notoriously difficult to target with conventional drugs.
However, new research shows that novel bioengineered proteins can target neurons in the hippocampus. Using a mouse model, the researchers found that these proteins could be delivered to the hippocampus intranasally — through the nose via a spray.
“This is an urgent topic because many potential therapeutic agents do not readily cross the blood-brain barrier or have limited effects even after intranasal delivery,” said Konrad Talbot, PhD, professor of neurosurgery and pathology at Loma Linda University, Loma Linda, California, who was not involved in the study.
This is the first time a protein drug, which is larger than many drug molecules, has been specifically delivered to the hippocampus, said Noriyasu Kamei, PhD, a professor of pharmaceutical science at Kobe Gakuin University in Kobe, Japan, and lead author of the study.
How Did They Do It?
“Smuggle” may be a flip term, but it’s not inaccurate.
Insulin has the ability to cross the BBB, so the team began with insulin as the vehicle. By attaching other molecules to an insulin fragment, researchers theorized they could create an insulin fusion protein that can be transported across the BBB and into the brain via a process called macropinocytosis.
They executed this technique in mice by fusing fluorescent proteins to insulin. To treat Alzheimer’s or other diseases, they would want to fuse therapeutic molecules to the insulin for brain delivery — a future step for their research.
Other groups are studying a similar approach that uses the transferrin receptor instead of insulin to shuttle molecules across the BBB. However, the transferrin receptor route doesn’t reach the hippocampus, Kamei said.
A benefit of their system, Kamei pointed out, is that because the method just requires a small piece of insulin to work, it’s straightforward to produce in bacteria. Importantly, he said, the insulin fusion protein should not affect blood glucose levels.
Why Insulin?
Aside from its ability to cross the BBB, the team thought to use insulin as the basis of a fusion protein because of their previous work.
“I found that insulin has the unique characteristics to be accumulated specifically in the hippocampal neuronal layers,” Kamei explained. That potential for accumulation is key, as they can deliver more of a drug that way.
In their past work, Kamei and colleagues also found that it could be delivered from the nose to the brain, indicating that it may be possible to use a simple nasal spray.
“The potential for noninvasive delivery of proteins by intranasal administration to the hippocampal neurons is novel,” said John Varghese, PhD, professor of neurology at the University of California, Los Angeles, who was not involved in the study. He noted that it’s also possible this method could be harnessed to treat other brain diseases.
There are other drugs that treat central nervous system diseases, such as desmopressin and buserelin, which are available as nasal sprays. However, these drugs are synthetic hormones, and though relatively small molecules, they do not cross the BBB.
There are also antibody treatments for Alzheimer’s, such as aducanumab (which will soon be discontinued), lecanemab, and donanemab. However, these aren’t always effective, they require an intravenous infusion, and they cross the BBB only to a limited degree; to bolster their delivery to the brain, studies have proposed additional methods such as focused ultrasound.
“Neuronal uptake of drugs potentially therapeutic for Alzheimer’s may be significantly enhanced by fusion of those drugs with insulin. This should be a research priority,” said Talbot.
While this is exciting and has potential, such drugs won’t be available anytime soon. Kamei would like to complete the research at a basic level in 5 years, including testing insulin fused with larger proteins such as therapeutic antibodies. If all goes well, they’ll move on to testing insulin fusion drugs in people.
A version of this article first appeared on Medscape.com.
FROM PNAS
High Cadmium Level Associated With Cognitive Impairment Risk
TOPLINE:
High levels of urinary cadmium are associated with double the risk for global cognitive impairment in White adults, a new study shows. There was no such association between the heavy metal and cognitive function in Black adults.
METHODOLOGY:
- Investigators reviewed data on 2172 adults (mean age, 64 years; 61% White; 39% Black; 55% women) from the ongoing REGARDS population-based prospective cohort study in the United States who were free of cognitive impairment or stroke at baseline.
- Global cognitive impairment was assessed annually using the Six-Item Screener, and domain-based cognitive impairment was assessed every 2 years using the Enhanced Cognitive Battery.
- Blood and urine samples were collected from the participants at baseline, and levels of urinary cadmium were assessed using a urinary creatinine-correction method (a brief worked example follows this list).
- Covariates included participants’ age, sex, smoking pack-years, alcohol consumption, and education level.
- Mean follow-up was 10 years.
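Creatinine correction is a standard adjustment for how dilute a spot urine sample is; the specific assay and tertile cutoffs used in REGARDS are in the original article, but the correction itself is simply the cadmium concentration divided by the creatinine concentration measured in the same sample:

\[
\text{Cd}_{\text{corrected}}\ (\mu\text{g/g creatinine}) = \frac{\text{urinary Cd}\ (\mu\text{g/L})}{\text{urinary creatinine}\ (\text{g/L})}
\]

For example, 0.6 µg/L of urinary cadmium in a sample containing 1.2 g/L of creatinine corresponds to 0.5 µg/g creatinine (illustrative numbers only).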
TAKEAWAY:
- Global cognitive impairment was observed in 195 cases and domain-based cognitive impairment in 53 cases.
- High levels of urinary cadmium were associated with double the risk of developing global cognitive impairment in White adults (odds ratio [OR], 2.07; 95% CI, 1.18-3.64).
- No association was observed between urinary cadmium and global cognitive impairment in the overall cohort or in Black adults.
- Median smoking pack-years — a significant source of cadmium exposure for the US population — was significantly higher in White participants than in Black participants (P = .001 for the highest tertile of urinary cadmium concentration).
IN PRACTICE:
“These results need to be confirmed with studies that measure cadmium levels over time, include more people and follow people over a longer time, but there are many reasons to reduce exposure to cadmium, whether it’s through implementing policies and regulations for air pollution and drinking water or people changing their behaviors by stopping smoking or being around cigarette smoke,” lead author Liping Lu, MD, PhD, MS, Columbia University, New York City, said in a press release.
SOURCE:
The study was published online in Neurology.
LIMITATIONS:
Urinary cadmium levels were tested only at baseline, which may not have captured changes in exposure over time. A limited number of patients with cognitive impairment used the Enhanced Cognitive Battery. The study did not include occupational information, and the potential for residual confounding from smoking could not be completely excluded. The follow-up time may have been insufficient for observing a significant effect on cognition, and competing risks for mortality associated with cadmium exposure could also have affected the findings.
DISCLOSURES:
The study was co-funded by the National Institute of Neurological Disorders and Stroke and the National Institute on Aging of the National Institutes of Health (NIH). Several authors were partially supported by the NIH. Detailed disclosures are provided in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Autonomy Versus Safety in Cognitive Impairment Decision-Making
DUBLIN – As healthcare systems across Europe deal with an increasing prevalence of cognitive impairment, family doctors are emerging as key players in assessing and supporting patients’ decision-making capacities.
This was a central theme at the 29th WONCA Europe Conference, where the European Young Family Doctors Movement (EYFDM) presented insights from a project conducted across Europe, involving young general practitioners who participated in workshops held in multiple countries.
“Family doctors are the linchpin in these decisions,” said Alina Zidaru, MD, from the Irish College of Physicians, Dublin. “They understand the patient’s history, build long-term relationships, and are best positioned to ensure that decisions reflect the patient’s values, not just what the law or the family might say.”
Dr. Zidaru and her colleague, Nick Mamo, MD, member of EYFDM in Glasgow, Scotland, emphasized the central role family doctors play in ensuring that patient rights and preferences are respected, regardless of their cognitive state. They are often the first to identify cognitive impairments and must carefully navigate the legal and ethical landscape of decision-making support.
“Often, we focus too much on avoiding harm and overlook the principle of autonomy,” said Dr. Mamo. “But it’s essential to give patients the right to make their own decisions, even when those decisions might seem unwise to us.”
The Case of Jay
Dr. Zidaru said: “We’ve conducted workshops in Brussels, Vienna, and Sydney, focusing on how to build habits that support patients. We presented real-life cases, like Jay, a 43-year-old man with trisomy and a moderate intellectual disability who must decide whether to undergo surgery for a hernia. The most significant challenge was ensuring continuity of care and respecting his autonomy, despite cognitive limitations.”
Jay’s case illustrates the complex ethical dilemmas faced by family doctors when balancing autonomy with patient safety. In many cases, cognitive impairments raise concerns about whether a patient can make decisions independently.
During the session, the audience was asked to share their thoughts on the case and to indicate whether they would allow Jay to make his own decision, and if they felt confident in assessing his cognitive capacity. The responses revealed a range of mixed feelings.
Legal and Cultural Variations Across Europe
The session also explored how different European countries approach decision-making for cognitively impaired individuals. A clear divide exists between nations that give family members automatic decision-making rights and those that require legal appointments.
In the United Kingdom, the Mental Capacity Act 2005 presumes capacity unless proven otherwise. Family doctors can assess patients’ decision-making abilities using any validated tool they find suitable. They should also aim to ensure that decisions are made in the patient’s best interests if they lack capacity. Family members only have legal authority if appointed through formal means, such as a lasting power of attorney.
In Spain and Italy, functional assessments are performed, and patients retain decision-making authority in the areas where they demonstrate competence. Legal guardianship can be appointed by the courts, sometimes limited to specific areas, but it is intended to support rather than replace the patient’s autonomy.
In France and Portugal, guardianship may be implemented in specific domains, but the patient’s ability to participate in decisions is always prioritized.
In Turkey, according to Turkish general practitioners in the audience, the courts and close family members often share the decision-making responsibility.
Dr. Zidaru added that Ireland’s Assisted Decision-Making (Capacity) Act 2015 introduced significant changes to how cognitive impairment is managed there. “Ireland adopted a standardized functional test of capacity, used by any doctor. A person can still make decisions as long as they understand, retain, and weigh the information needed to make that choice. If their capacity diminishes, a decision-making assistant, co–decision-maker, or representative can be appointed, but the patient’s will and preferences always come first.”
Family Doctors, a Growing Responsibility
“It’s not just about the legal framework: it’s about cultural awareness and early communication,” added Dr. Mamo. “We have to ask ourselves: Do patients have the right to make bad decisions? And how do we, as family doctors, respect that while still ensuring their safety?”
The session concluded with a discussion on how the role of family doctors in decision-making for cognitively impaired patients will evolve as populations age and the incidence of conditions like dementia increases. The workload is rising, and the need for clear, consistent guidelines is critical.
“Family doctors will continue to play a central role in managing these challenges,” Dr. Zidaru emphasized. “But we need more resources, more education, and more support to ensure we can respect patient autonomy without compromising their well-being.”
A version of this article first appeared on Medscape.com.
DUBLIN – As healthcare systems across Europe deal with an increasing prevalence of cognitive impairment, family doctors are emerging as key players in assessing and supporting patients’ decision-making capacities.
This was a central theme at the 29th WONCA Europe Conference, where the European Young Family Doctors Movement (EYFDM) presented insights from a project conducted across Europe, involving young general practitioners who participated in workshops held in multiple countries.
“Family doctors are the linchpin in these decisions,” said Alina Zidaru, MD, from the Irish College of Physicians, Dublin. “They understand the patient’s history, build long-term relationships, and are best positioned to ensure that decisions reflect the patient’s values, not just what the law or the family might say.”
Dr. Zidaru and her colleague, Nick Mamo, MD, member of EYFDM in Glasgow, Scotland, emphasized the central role family doctors play in ensuring that patient rights and preferences are respected, regardless of their cognitive state. They are often the first to identify cognitive impairments and must carefully navigate the legal and ethical landscape of decision-making support.
“Often, we focus too much on avoiding harm and overlook the principle of autonomy,” said Dr. Mamo. “But it’s essential to give patients the right to make their own decisions, even when those decisions might seem unwise to us.”
The Case of Jay
Dr. Zidaru said: “We’ve conducted workshops in Brussels, Vienna, and Sydney, focusing on how to build habits that support patients. We presented real-life cases, like Jay, a 43-year-old man with trisomy and a moderate intellectual disability who must decide whether to undergo surgery for a hernia. The most significant challenge was ensuring continuity of care and respecting his autonomy, despite cognitive limitations.”
Jay’s case illustrates the complex ethical dilemmas faced by family doctors when balancing autonomy with patient safety. In many cases, cognitive impairments raise concerns about whether a patient can make decisions independently.
During the session, the audience was asked to share their thoughts on the case and to indicate whether they would allow Jay to make his own decision, and if they felt confident in assessing his cognitive capacity. The responses revealed a range of mixed feelings.
Legal and Cultural Variations Across Europe
The session also explored how different European countries approach decision-making for cognitively impaired individuals. A clear divide exists between nations that give family members automatic decision-making rights and those that require legal appointments.
In the United Kingdom, the Mental Capacity Act 2005 presumes capacity unless proven otherwise. Family doctors can assess patients’ decision-making abilities using any validated tool they find suitable. They should also aim to ensure that decisions are made in the patient’s best interests if they lack capacity. Family members only have legal authority if appointed through formal means, such as a lasting power of attorney.
In Spain and Italy, functional assessments are performed when patients retain decision-making authority in areas where they demonstrate competence. Legal guardianship can be appointed by the courts, sometimes limited to specific areas, but it is intended to support rather than replace the patient’s autonomy.
In France and Portugal, guardianship may be implemented in specific domains, but the patient’s ability to participate in decisions is always prioritized.
In Turkey, according to Turkish general practitioners in the audience, the courts and close family members often share the decision-making responsibility.
Dr. Zidaru added that Ireland’s Assisted Decision-Making (Capacity) Act 2015 introduced significant changes to how cognitive impairment is managed there. “Ireland adopted a standardized functional test of capacity, used by any doctor. A person can still make decisions as long as they understand, retain, and weigh the information needed to make that choice. If their capacity diminishes, a decision-making assistant, co–decision-maker, or representative can be appointed, but the patient’s will and preferences always come first.”
Family Doctors, a Growing Responsibility
“It’s not just about the legal framework: it’s about cultural awareness and early communication,” added Dr. Mamo. “We have to ask ourselves: Do patients have the right to make bad decisions? And how do we, as family doctors, respect that while still ensuring their safety?”
The session concluded with a discussion on how the role of family doctors in decision-making for cognitively impaired patients will evolve as populations age and the incidence of conditions like dementia increases. The workload is rising, and the need for clear, consistent guidelines is critical.
“Family doctors will continue to play a central role in managing these challenges,” Dr. Zidaru emphasized. “But we need more resources, more education, and more support to ensure we can respect patient autonomy without compromising their well-being.”
A version of this article first appeared on Medscape.com.
FROM WONCA EUROPEAN CONFERENCE 2024
Alzheimer’s and Comorbidities: Implications for Patient Care
Alzheimer’s disease (AD), the most common cause of dementia, is the fifth leading cause of death in the United States. An estimated 6.9 million Americans aged 65 years or older have AD. Comorbid conditions in AD may exacerbate the progression of dementia and negatively affect overall health.
Although the exact mechanisms remain unclear, systemic inflammation is thought to play a significant role in the development of many common comorbidities associated with AD. Among the most frequently observed comorbid conditions are hypertension, diabetes, and depression. The presence of these comorbidities affects the treatment and management of AD, underscoring the need to understand the mechanisms of their interrelationship and develop effective management strategies.
Hypertension
Hypertension is a well-established risk factor for numerous health conditions, including AD. A comprehensive review of five meta-analyses and 52 primary studies revealed that elevated systolic blood pressure (SBP) correlates with an 11% increased risk of developing AD, raising the question of whether early intervention and control of blood pressure would mitigate the risk for AD later in life.
Findings from the Northern Manhattan Study suggest that although elevated SBP contributes to cognitive decline in older patients, the use of antihypertensive medications can neutralize the effects of high SBP on certain cognitive functions. Furthermore, a systematic review and meta-analysis comprising 12 trials (92,135 participants) demonstrated a significant reduction in the risk for dementia and cognitive impairment with antihypertensive treatment.
Notably, a retrospective cohort study involving 69,081 participants treated with beta-blockers for hypertension found that beta-blockers with high blood-brain barrier permeability were associated with a reduced risk for AD compared with those with low blood-brain barrier permeability. Additionally, a secondary analysis of the SPRINT trial found that antihypertensive medications that stimulate, rather than inhibit, type 2 and 4 angiotensin II receptors were associated with a lower incidence of cognitive impairment. Although further clinical trials are necessary to directly assess specific medications, these findings emphasize the potential of antihypertensive treatment as a strategic approach to reduce the risk for AD.
Type 2 Diabetes
The connection between AD and type 2 diabetes is such that AD is sometimes referred to as “type 3 diabetes.” Both diseases share some of the same underlying pathophysiologic mechanisms, particularly the development of insulin resistance and oxidative stress. A prospective cohort study of 10,095 participants showed that diabetes was significantly associated with a higher risk of developing dementia; this risk is even greater in patients who develop diabetes at an earlier age.
In an interview with this news organization, Alvaro Pascual-Leone, MD, PhD, a professor of neurology at Harvard Medical School, Boston, said, “In addition to being a comorbidity factor, diabetes appears to be a predisposing risk factor for AD.” This is supported by a comprehensive literature review showing an increased progression from mild cognitive impairment (MCI) to dementia in patients with diabetes, prediabetes, or metabolic syndrome, with a pooled odds ratio for dementia progression in individuals with diabetes of 1.53.
Owing to the overlapping pathophysiologic mechanisms in AD and diabetes, treating one condition may have beneficial effects on the other. A systematic umbrella review and meta-analysis that included 10 meta-analyses across nine classes of diabetes drugs found a protective effect against dementia with the use of metformin, thiazolidinediones (including pioglitazone), glucagon-like peptide 1 receptor agonists, and sodium-glucose cotransporter 2 inhibitors. Moreover, a cohort study of 12,220 patients who discontinued metformin early (ie, stopped using metformin without a prior history of abnormal kidney function) and 29,126 patients considered routine users found an increased risk for dementia in the early terminator group. Although further research is warranted, the concurrent treatment of AD and diabetes with antidiabetic agents holds considerable promise.
Depression and Anxiety
Anxiety and depression are significant risk factors for AD, and conversely, AD increases the likelihood of developing these psychiatric conditions. A systematic review of 14,760 studies showed dysthymia often emerges during the early stages of AD as an emotional response to cognitive decline.
Data from the Australian Imaging Biomarkers and Lifestyle study showed a markedly elevated risk for AD and MCI among individuals with preexisting anxiety or depression. This study also found that age, sex, and marital status are important determinants, with men and single individuals with depression being particularly susceptible to developing AD. Conversely, a cohort study of 129,410 patients with AD, 390,088 patients with all-cause dementia, and 3,900,880 age-matched controls without a history of depression showed a cumulative incidence of depression of 13% in the AD group vs 3% in the control group, suggesting a heightened risk for depression following an AD diagnosis.
These findings underscore the importance of targeted screening and assessment for patients with anxiety and depression who may be at risk for AD or those diagnosed with AD who are at risk for subsequent depression and anxiety. Although antidepressants are effective in treating depression in general, the evidence for their efficacy in AD-related depression is of variable quality, probably owing to differing pathophysiologic mechanisms of the disease. Further research is necessary to explore both pharmacologic and nonpharmacologic interventions for treating depression in AD patients. Some studies have found that cognitive-behavioral therapy can be effective in improving depression in patients with AD.
Sleep Disorders
Research has shown a strong correlation between AD and sleep disorders, particularly obstructive sleep apnea, insomnia, and circadian rhythm disruptions. Additionally, studies suggest that insomnia and sleep deprivation contribute to increased amyloid beta production and tau pathology, hallmark features of AD. A scoping review of 70 studies proposed that this relationship is mediated by the glymphatic system (glial-dependent waste clearance pathway in the central nervous system), and that sleep deprivation disrupts its function, leading to protein accumulation and subsequent neurologic symptoms of AD. Another study showed that sleep deprivation triggers glial cell activation, initiating an inflammatory cascade that accelerates AD progression.
Given that the gold standard treatment for obstructive sleep apnea is continuous positive airway pressure (CPAP), it has been hypothesized that CPAP could also alleviate AD symptoms owing to shared pathophysiologic mechanisms of these conditions. A large systematic review found that CPAP use improved AD symptoms in patients with mild AD or MCI, though other sleep interventions, such as cognitive-behavioral therapy and melatonin supplementation, have yielded mixed outcomes. However, most studies in this area are small in scale, and there remains a paucity of research on treating sleep disorders in AD patients, indicating a need for further investigation.
Musculoskeletal Disorders
Although no direct causative link has been established, research indicates an association between osteoarthritis (OA) and dementia, likely because of similar pathophysiologic mechanisms, including systemic inflammation. Longitudinal analyses of data from the Alzheimer’s Disease Neuroimaging Initiative study found cognitively normal older individuals with OA experience more rapid declines in hippocampal volumes compared to those without OA, suggesting that OA may elevate the risk of cognitive impairment. Current treatments for OA, such as nonsteroidal anti-inflammatory drugs, glucocorticoids, and disease-modifying OA drugs, might also help alleviate AD symptoms related to inflammation, though the research in this area is limited.
AD has also been linked to osteoporosis. In a longitudinal follow-up study involving 78,994 patients with osteoporosis and 78,994 controls, AD developed in 5856 patients with osteoporosis compared with 3761 patients in the control group. These findings represent a 1.27-fold higher incidence of AD in patients with osteoporosis than in the control group, suggesting that osteoporosis might be a risk factor for AD.
Additionally, research has identified a relationship between AD and increased fracture risk and decreased bone mineral density, with AD patients exhibiting a significantly higher likelihood of bone fractures compared with those without AD. “Falls and fractures, aside from the risk they pose in all geriatric patients, in individuals with cognitive impairment — whether due to AD or another cause — have higher risk to cause delirium and that can result in greater morbidity and mortality and a lasting increase in cognitive disability,” stated Dr. Pascual-Leone. Current recommendations emphasize exercise and fall prevention strategies to reduce fracture risk in patients with AD, but there is a lack of comprehensive research on the safety and efficacy of osteoporosis medications in this population.
Implications for Clinical Practice
The intricate interplay between AD and its comorbidities highlights the need for a comprehensive and integrated approach to patient care. The overlapping pathophysiologic mechanisms suggest that these comorbidities can contribute to the evolution and progression of AD. Likewise, AD can exacerbate comorbid conditions. As such, a holistic assessment strategy that prioritizes early detection and management of comorbid conditions to mitigate their impact on AD progression would be beneficial. Dr. Pascual-Leone added, “The presence of any of these comorbidities suggests a need to screen for MCI earlier than might otherwise be indicated or as part of the treatment for the comorbid condition. In many cases, patients can make lifestyle modifications that improve not only the comorbid condition but also reduce its effect on dementia.” In doing so, healthcare providers can help improve patient outcomes and enhance the overall quality of life for individuals living with AD.
Alissa Hershberger, Professor of Nursing, University of Central Missouri, Lee’s Summit, Missouri, has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Antidepressants Linked to Improved Verbal Memory
MILAN — Antidepressant treatment was associated with improved verbal memory in patients with depression, a clinical effect linked to changes in serotonin 4 (5-HT4) receptor levels in the brain, as shown on PET.
These findings suggested there is a role for specifically targeting the 5-HT4 receptor to improve verbal memory in depression, said investigator Vibeke H. Dam, PhD, from Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.
“Verbal memory is often impaired in depression, and this has a lot of impact on patients’ ability to work and have a normal life. That’s why we’re so excited about this receptor in particular,” Dr. Dam said.
“If we can find a way to activate it more directly, we’re thinking this could be a way to treat this memory symptom that a lot of patients have and that currently we don’t really have a treatment for,” she added.
The findings were presented at the 37th European College of Neuropsychopharmacology (ECNP) Congress and recently published in Biological Psychiatry.
Largest Trial of Its Kind
The study is the largest single-site PET trial investigating serotonergic neurotransmission in major depressive disorder over the course of antidepressant treatment to date. It included 90 patients with moderate to severe depression who underwent baseline cognitive tests and brain scans to measure 5-HT4 receptor levels before starting their treatment with the selective serotonin reuptake inhibitor escitalopram.
Patients who showed no improvement in depressive symptoms after 4 weeks (n = 14), as assessed by the Hamilton Depression Rating Scale 6 (HAMD6), were switched to the serotonin-norepinephrine reuptake inhibitor duloxetine.
Both escitalopram and duloxetine inhibit the reuptake of serotonin, enhancing neurotransmitter activity; escitalopram primarily increases serotonin levels, while duloxetine increases both serotonin and norepinephrine levels.
The primary cognitive outcome measure was change in the Verbal Affective Memory Task 26. Secondary cognitive outcomes were change in working memory, reaction time, emotion recognition bias, and negative social emotion.
After 8 weeks of treatment, a subset of 40 patients repeated PET scans, and at 12 weeks, all patients repeated cognitive testing.
Matching neuroimaging and cognitive data were available for 88 patients at baseline and for 39 patients at rescan.
As expected, the study showed that antidepressant treatment resulted in the downregulation of 5-HT4 receptor levels. “One hypothesis is that if we increase the availability of serotonin [with treatment], downregulation of the receptors might be a response,” said Dr. Dam.
“What was interesting was that this was the effect across all patients, whether they [clinically] responded or not. So we see the medication does what it’s supposed to do in the brain.” But, she said, there was no association between 5-HT4 receptor levels and HAMD6 scores.
Gains in Verbal Memory
Although the downregulation of 5-HT4 did not correlate with somatic or mood symptoms, it did correlate with cognitive symptoms.
Interestingly, while most patients showed improvement in depressive symptoms — many reaching remission or recovery — they also experienced gains in verbal memory. However, these improvements were not correlated. It was possible for one to improve more than the other, with no apparent link between the two, said Dr. Dam.
“What was linked was how the brain responded to the medication for this particular receptor. So even though there is this downregulation of the receptor, there’s still a lot of activation of it, and our thinking is that it’s activation of the receptor that is the important bit.”
Work by other groups has shown that another medication, prucalopride, which is used to treat gastroparesis, can more directly activate the 5-HT4 receptor, and that the treatment of healthy volunteers with this medication can boost memory and learning, said Dr. Dam.
“We could repurpose this drug, and we’re currently looking for funding to test this in a wide variety of different groups such as concussion, diabetes, and depression.”
The study’s coinvestigator, Vibe G. Frokjaer, MD, said more research is required to understand the potential implications of the findings.
“Poor cognitive function is very hard to treat efficiently and may require extra treatment. This work points to the possibility of stimulating this specific receptor so that we can treat cognitive problems, even aside from whether or not the patient has overcome the core symptoms of depression,” she said in a release.
Commenting on the research, Philip Cowen, MD, professor of psychopharmacology at the University of Oxford, England, said in a release that in light of “recent controversies about the role of brain serotonin in clinical depression, it is noteworthy that the PET studies of the Copenhagen Group provide unequivocal evidence that brain 5-HT4 receptors are decreased in unmedicated depressed patients.
“Their work also demonstrates the intimate role of brain 5-HT4 receptors in cognitive function,” he added. “This confirms recent work from Oxford, showing that the 5-HT4 receptor stimulant, prucalopride — a drug licensed for the treatment of constipation — improves memory in both healthy participants and people at risk of depression,” he added.
The study was funded by the Innovation Fund Denmark, Research Fund of the Mental Health Services – Capital Region of Denmark, Independent Research Fund Denmark, Global Justice Foundation, Research Council of Rigshospitalet, Augustinus Foundation, Savværksejer Jeppe Juhl og hustru Ovita Juhls Mindelegat, Lundbeck Foundation, and H. Lundbeck A/S.
Dr. Dam reported serving as a speaker for H. Lundbeck. Frokjaer reported serving as a consultant for Sage Therapeutics and lecturer for H. Lundbeck, Janssen-Cilag, and Gedeon Richter. Study investigator Martin B. Jørgensen has given talks sponsored by Boehringer Ingelheim and Lundbeck Pharma. All other investigators reported no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM ECNP 2024