Adult ADHD improved by home-based, noninvasive brain stimulation

Transcranial direct current stimulation (tDCS) using a home-based device can help improve attention in adults with attention-deficit/hyperactivity disorder who are not taking stimulants, new research suggests.

Results from the sham-controlled trial also showed that the tDCS treatment was both safe and well tolerated.

Overall, the findings suggest that the device could be a nondrug alternative for treating this patient population, Douglas Teixeira Leffa, MD, PhD, department of psychiatry, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil, and colleagues note.

“This is particularly relevant since a vast body of literature describes low long-term adherence rates and persistence to pharmacological treatment in patients with ADHD,” they write.

The findings were published online in JAMA Psychiatry.
 

Avoiding office visits

A noninvasive technique that is easy to use and relatively inexpensive, tDCS involves applying a low-intensity current over the scalp to modulate cortical excitability and induce neuroplasticity. Home-use tDCS devices, which avoid the need for daily office visits for stimulation sessions, have been validated in previous clinical samples.

The current study included 64 adults with ADHD who were not taking stimulants. All had moderate or severe symptoms of inattention, defined as an inattention score of 21 or higher on the clinician-administered Adult ADHD Self-Report Scale version 1.1 (CASRS).

The CASRS includes nine questions related to inattention symptoms (CASRS-I) and nine related to hyperactivity-impulsivity symptoms (CASRS-HI). The score can vary from 0 to 36 for each domain, with higher scores indicating increased symptoms.
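
To make the scoring concrete, here is a minimal Python sketch of how a CASRS domain score could be tallied and checked against the trial's CASRS-I entry threshold. It assumes each of the nine items is rated 0-4 (giving the 0-36 range described above), and the example ratings are invented for illustration; this is not the study's code.

```python
# Illustrative sketch of the scoring described above, not the study's code.
# Assumes each of the nine items in a domain is rated 0-4, so a domain
# score can range from 0 to 36; the example ratings are made up.

def casrs_domain_score(item_ratings):
    """Sum nine item ratings (each assumed 0-4) into a domain score (0-36)."""
    if len(item_ratings) != 9 or any(not 0 <= r <= 4 for r in item_ratings):
        raise ValueError("expected nine item ratings between 0 and 4")
    return sum(item_ratings)

inattention_items = [3, 2, 3, 2, 3, 2, 3, 2, 3]      # hypothetical CASRS-I ratings
hyperactivity_items = [1, 2, 1, 0, 2, 1, 1, 0, 1]    # hypothetical CASRS-HI ratings

casrs_i = casrs_domain_score(inattention_items)
casrs_hi = casrs_domain_score(hyperactivity_items)

# The trial required a CASRS-I score of 21 or higher for inclusion.
print(casrs_i, casrs_hi, casrs_i >= 21)   # -> 23 9 True
```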

Researchers randomly assigned participants to receive either active or sham stimulation.

The tDCS device used in the study delivered current through 35-cm2 electrodes (7 cm by 5 cm), with the anodal and cathodal electrodes positioned over the right and left dorsolateral prefrontal cortex (DLPFC), respectively.

The investigators note that decreased activation in the right DLPFC has been reported before in patients with ADHD during tasks that require attention.

After learning to use the device, participants underwent 30-minute daily sessions of tDCS (2-mA direct constant current) for 4 weeks for a total of 28 sessions.

Devices programmed for sham treatment delivered a 30-second ramp-up (0-2 mA) stimulation followed by a 30-second ramp-down (2-0 mA) at the beginning, middle, and end of the application. This mimicked the tactile sensations reported with tDCS and has been shown to be a reliable sham protocol.
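
As a rough illustration of how the two stimulation conditions differ, the sketch below models the current delivered over a session. The 2-mA constant current, 30-minute duration, and 30-second ramps at the beginning, middle, and end come from the description above; the linear ramp shape and exact placement of the middle and end ramps are our assumptions, not the device's specification.

```python
# Minimal sketch of the active vs. sham current profiles described above.
# The 30-second ramps at the beginning, middle, and end of a 30-minute
# session are taken from the text; the linear ramp shape and the exact
# placement of the middle and end ramps are illustrative assumptions.

SESSION_S = 30 * 60   # 30-minute session, in seconds
TARGET_MA = 2.0       # 2-mA direct constant current
RAMP_S = 30           # 30-second ramp-up and ramp-down

def active_current(t):
    """Constant 2-mA stimulation for the whole session."""
    return TARGET_MA if 0 <= t <= SESSION_S else 0.0

def sham_current(t):
    """Ramp up (0-2 mA) then down (2-0 mA) at the start, middle, and end; 0 mA otherwise."""
    for start in (0, SESSION_S / 2 - RAMP_S, SESSION_S - 2 * RAMP_S):
        if start <= t < start + RAMP_S:                # ramp up
            return TARGET_MA * (t - start) / RAMP_S
        if start + RAMP_S <= t < start + 2 * RAMP_S:   # ramp down
            return TARGET_MA * (1 - (t - start - RAMP_S) / RAMP_S)
    return 0.0

# Mid-session, the active device delivers 2 mA while the sham device delivers none.
print(active_current(600), sham_current(600))   # -> 2.0 0.0
```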

Participants were encouraged to perform the stimulation sessions at the same time of day. To improve adherence, they received daily text message reminders.

Nine patients discontinued treatment: two in the sham group and seven in the active group. However, patients who finished the trial completed a mean of 25 of the 28 planned sessions.
 

Window of opportunity?

The mean inattention score on the CASRS-I at week 4, the primary outcome, was 18.88 in the active tDCS group vs. 23.63 in the sham tDCS group. There was a statistically significant treatment-by-time interaction for the CASRS-I (beta interaction, –3.18; 95% confidence interval, –4.60 to –1.75; P < .001), indicating a greater reduction in inattention symptoms in the active group than in the sham group.

The estimated Cohen’s d was 1.23 (95% CI, 0.67-1.78), indicating at least a moderate effect. This effect was similar to that reported with trigeminal nerve stimulation (TNS), the first approved device-based therapy for ADHD, and to that of atomoxetine, a second-line treatment for ADHD, the researchers note.
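
For readers unfamiliar with the metric, the sketch below shows the standard Cohen's d formula for two independent groups. The trial's own estimate came from its statistical model, so the sample scores here are invented purely to illustrate the calculation.

```python
# Generic Cohen's d calculation, shown only to illustrate how this kind of
# effect size is defined; the trial's estimate of 1.23 came from its own
# statistical model, and the scores below are made-up numbers, not study data.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (group_b minus group_a) using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_b) - mean(group_a)) / pooled_var ** 0.5

# Hypothetical week-4 CASRS-I scores (lower scores mean fewer inattention symptoms)
active = [14, 23, 17, 22, 16, 21, 15, 20]
sham = [19, 28, 22, 27, 21, 26, 20, 26]
print(round(cohens_d(active, sham), 2))   # -> 1.48 with these made-up numbers
```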

About one-third of patients (34.3%) in the active tDCS group achieved a 30% reduction in CASRS-I score, compared with 6.2% in the sham tDCS group.

There was no statistically significant difference in the secondary outcome of hyperactivity-impulsivity symptoms evaluated with the CASRS-HI. This may be because hyperactivity-impulsivity in ADHD is associated with a hypoactivation in the right inferior frontal cortex rather than the right DLPFC, the investigators write.

There were also no significant group differences in other secondary outcomes, including depression, anxiety, and executive function.

Adverse events (AEs) were mostly mild and included skin redness and scalp burn. There were no severe or serious AEs.

Using a home-based tDCS device allows for considerably more sessions, with 28 being the highest number so far applied to patients with ADHD. This, the researchers note, is important because evidence suggests increased efficacy of tDCS with extended periods of treatment.

The home-based device “opens a new window of opportunity, especially for participants who live in geographically remote areas or have physical or cognitive disabilities that may hinder access to clinical centers,” they write.

A study limitation was the relatively high dropout rate in the active group, which might bias interpretation of the findings. However, only two of the seven dropouts in the active group left because of an AE, the investigators note.

Patients received training in using the device, but sessions were not monitored remotely. In addition, the study population was relatively homogeneous, with no participants having moderate to severe symptoms of depression or anxiety, and thus differed from the usual patients with ADHD treated in clinical centers, the researchers point out.

Moreover, the study included only patients not taking pharmacologic treatment for ADHD, so the findings might not generalize to other patients, they add.
 

‘Just a first step’

Commenting on the study, Mark George, MD, distinguished professor of psychiatry, radiology, and neurology, Medical University of South Carolina, Charleston, noted that although this was a single-center study with a relatively small sample size, it is still important.

Showing it is possible to do high-quality tDCS studies at home “is a huge advance,” said Dr. George, who was not involved with the research.

“Home treatment is cheaper and easier for patients and allows many people to get treatment who would not be able to make it to the clinic daily for treatment,” he added.

He noted the study showed “a clear improvement in ADHD,” which is important because better treatments are needed.

However, he cautioned that this is “just a first step” and more studies are needed. For example, he said, it is not clear whether improvements persist and if patients need to self-treat forever, as they would with a medication.

Dr. George also noted that although the study used “a pioneering research device” with several safety features, many home-based tDCS devices on the market do not have those.

“I don’t advise patients to do this now. Further studies are needed for FDA approval and general public use,” he said.

The study was funded by the National Council for Scientific and Technological Development, the Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul, the Brain & Behavior Research Foundation, Fundação de Amparo à Pesquisa do Estado de São Paulo, and the Brazilian Innovation Agency. Dr. Leffa reported having received grants from the Brain & Behavior Research Foundation, the National Council for Scientific and Technological Development, and Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul during the conduct of the study. Dr. George reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Positive phase 3 results for novel schizophrenia drug

The investigational agent xanomeline-trospium (KarXT, Karuna Therapeutics), which combines a muscarinic receptor agonist with an anticholinergic agent, helps improve psychosis symptoms and is not associated with weight gain or sedation in adults with schizophrenia, new research shows.

Top-line results from the phase 3 EMERGENT-2 trial showed a significantly greater reduction from baseline on Positive and Negative Syndrome Scale (PANSS) total scores for those receiving the active drug than for those receiving placebo, meeting its primary endpoint.

The findings “underscore the potential for KarXT, with its novel and unique mechanism of action, to redefine what successful treatment looks like for the 21 million people living with schizophrenia worldwide, and potentially usher in the first new class of medicine for these patients in more than 50 years,” Steve Paul, MD, chief executive officer, president, and chairman of Karuna Therapeutics, said in a press release.
 

Primary outcome met

About 20%-33% of patients with schizophrenia do not respond to conventional treatments, the company noted. Many have poor functional status and quality of life despite lifelong treatment with current antipsychotic agents.

Unlike current therapies, KarXT doesn’t rely on the dopaminergic or serotonergic pathways. It comprises the muscarinic agonist xanomeline and the muscarinic antagonist trospium and is designed to preferentially stimulate muscarinic receptors in the central nervous system.

Results from a phase 2 trial of almost 200 patients with schizophrenia were published last year in the New England Journal of Medicine. The findings showed that those who received xanomeline-trospium had a significantly greater reduction in psychosis symptoms than those who received placebo.

In the current phase 3 EMERGENT-2 trial, investigators included 252 adults aged 18-65 years who were diagnosed with schizophrenia and were experiencing symptoms of psychosis. Patients were randomly assigned to receive either a flexible dose of xanomeline-trospium or placebo twice daily.

The primary endpoint was change from baseline in the PANSS total score at week 5. Results showed a statistically significant and clinically meaningful 9.6-point greater reduction in PANSS total score with the active drug than with placebo (–21.2 vs. –11.6, respectively; P < .0001; Cohen’s d effect size, 0.61).
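
The sketch below simply spells out where the 9.6-point figure comes from: it is the difference between the two reported mean changes from baseline.

```python
# Simple arithmetic behind the primary endpoint above: the 9.6-point figure
# is the placebo-adjusted difference between the two groups' mean changes
# from baseline in PANSS total score (both changes are the reported values).
karxt_change = -21.2     # mean change at week 5, xanomeline-trospium
placebo_change = -11.6   # mean change at week 5, placebo

adjusted_difference = round(karxt_change - placebo_change, 1)
print(adjusted_difference)   # -> -9.6, i.e., a 9.6-point greater reduction with the active drug
```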

In addition, there was an early and sustained significant reduction of schizophrenia symptoms, as assessed by the PANSS total score, starting at week 2. This reduction was maintained through all trial timepoints.
 

Safety profile

The novel drug also met key secondary endpoints. In the active treatment group, there was a significant reduction on the PANSS subscales in both positive symptoms of schizophrenia, such as hallucinations or delusions, and negative symptoms, such as difficulty enjoying life or withdrawal from others.

Overall, the agent was generally well tolerated. The rate of treatment-emergent adverse events (TEAEs) was 75% with xanomeline-trospium versus 58% with placebo.

The most common TEAEs with the active treatment were all mild to moderate in severity and included constipation, dyspepsia, nausea, vomiting, headache, increases in blood pressure, dizziness, gastroesophageal reflux disease, abdominal discomfort, and diarrhea.

As in prior trials, an increase in heart rate was also associated with the active treatment and decreased in magnitude by the end of the current study.

Discontinuation rates related to TEAEs were similar between xanomeline-trospium (7%) and placebo (6%), as were rates of serious TEAEs (2% in each group) – which included suicidal ideation, worsening of schizophrenia symptoms, and appendicitis.

Notably, the drug was not associated with common problematic adverse events of current therapies, such as weight gain, sedation, and movement disorders.

Karuna plans to submit a New Drug Application for KarXT to the U.S. Food and Drug Administration in mid-2023. In addition to schizophrenia, the drug is in development for the treatment of other psychiatric and neurological conditions, including Alzheimer’s disease.

A version of this article first appeared on Medscape.com.


Chronically low wages linked to subsequent memory decline

Consistently earning a low salary in midlife is associated with increased memory decline in older age, new research suggests. In a new analysis of more than 3,000 participants in the Health and Retirement Study, those who sustained low wages in midlife showed significantly faster memory decline than their peers who never earned low wages.

The findings could have implications for future public policy and research initiatives, the investigators noted.

“Our findings, which suggest a pattern of sustained low-wage earning is harmful for cognitive health, [are] broadly applicable to researchers across numerous health disciplines,” said co-investigator Katrina Kezios, PhD, postdoctoral researcher, department of epidemiology, Mailman School of Public Health, Columbia University, New York.

The findings were presented at the 2022 Alzheimer’s Association International Conference.
 

Growing number of low-wage workers

Low-wage workers make up a growing share of the U.S. labor market. Yet little research has examined the long-term relationship between earning low wages and memory decline.

The current investigators assessed 1992-2016 data from the Health and Retirement Study, a longitudinal survey of nationally representative samples of Americans aged 50 years and older. Study participants are interviewed every 2 years and provide, among other things, information on work-related factors, including hourly wages.

Memory function was measured at each visit from 2004 to 2016 using a memory composite score, which included immediate and delayed word-recall assessments. For participants who became too impaired to complete cognitive testing, memory assessments by proxy informants were used.

On average, participants completed 4.8 memory assessments over the course of the study.

Researchers defined “low wage” as an hourly wage lower than two-thirds of the federal median wage for the corresponding year. They categorized low-wage exposure history as “never” or “intermittent” or “sustained” on the basis of wages earned from 1992 to 2004.
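
The sketch below illustrates how such an exposure history could be classified under that definition. The wage and median values are invented, and the study's actual handling of survey waves may differ; this is only a generic illustration of the rule.

```python
# Illustrative classification of low-wage exposure history, following the
# definition above: an hourly wage is "low" if it falls below two-thirds of
# the federal median hourly wage for that year. The median wages, the
# participant's wages, and the handling of survey waves are all made-up
# assumptions, not figures from the study.

def classify_wage_history(hourly_wages, median_wages):
    """Return 'never', 'sustained', or 'intermittent' low-wage exposure."""
    low_flags = [wage < (2 / 3) * median
                 for wage, median in zip(hourly_wages, median_wages)]
    if not any(low_flags):
        return "never"
    if all(low_flags):
        return "sustained"
    return "intermittent"

# Hypothetical participant observed at three survey waves
wages = [9.50, 10.25, 11.00]       # participant's hourly wages (made up)
medians = [15.00, 16.00, 17.00]    # federal median hourly wage each year (made up)
print(classify_wage_history(wages, medians))   # -> sustained
```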

The current analysis included 3,803 participants, 1,913 of whom were men. All participants were born from 1936 to 1941. In 2004, the average age was 65 years, and the mean memory score was 1.15 standard units.

The investigators adjusted for factors that could confound the relationship between wages and cognition, including the participant’s education, parental education, household wealth, and marital status. In a later model, they also adjusted for whether the participant’s occupation was low skilled.
 

Cognitive harm

The confounder-adjusted annual rate of memory decline among workers who never earned low wages was –0.12 standard units (95% confidence interval, –0.14 to –0.10).

Compared with these workers, memory decline was significantly faster among participants with sustained low wage–earning during midlife (beta for interaction between time and exposure group, –0.012; 95% CI, –0.02 to 0.01), corresponding to an annual rate of –0.13 standard units.

Put another way, the cognitive aging experienced by workers earning low wages over a 10-year period was equivalent to what workers who never earned low wages would experience over 11 years.

Although similar associations were found for men and women, the association was stronger in magnitude for men, a finding Dr. Kezios said was somewhat surprising. She noted that women are commonly at higher risk for dementia than men.

However, she advises caution in interpreting this finding, as there were so few men in the sustained low-wage group. “Women disproportionately make up the group of workers earning low wages,” she said.

A similar negative coefficient was observed for those who intermittently earned low wages, but it did not reach statistical significance.

“We can speculate or hypothesize the cumulative effect of earning low wages at each exposure interval produces more cognitive harm than maybe earning low wages at some time points over that exposure period,” said Dr. Kezios.

A sensitivity analysis that examined wage earning at the same ages but in two different birth cohorts showed similar results for the two groups. When researchers removed self-employed workers from the study sample, the same association between sustained low wages and memory decline was found.

“Our findings held up, which gave us a little more reassurance that what we were seeing is at least signaling there might be something there,” said Dr. Kezios.

She described the study as a “first pass” for documenting the harmful cognitive effects of consistently earning low wages.

It would be interesting, she said, to now determine whether there’s a “dose effect” for having a low salary. However, other studies with different designs would be needed to determine at what income level cognitive health starts to be protected and the impact of raising the minimum wage, she added.
 

 

 

Unique study

Heather Snyder, PhD, vice president of medical and scientific relations, Alzheimer’s Association, said the study was unique. “I don’t think we have seen anything like this before,” said Dr. Snyder.

The study, which links sustained low-wage earning in midlife to later memory decline, “is looking beyond some of the other measures we’ve seen when we looked at socioeconomic status,” she noted.

The results “beg the question” of whether people who earn low wages have less access to health care, she added.

“We should think about how to ensure access and equity around health care and around potential ways that may address components of risk individuals have during their life course,” Dr. Snyder said.

She noted that the study provides a “start” at considering potential policies to address the impact of sustained low wages on overall health, particularly cognitive health, throughout life.

The study had no outside funding. Dr. Kezios has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


COVID smell loss tops disease severity as a predictor of long-term cognitive impairment

Loss of smell, not disease severity, predicts persistent cognitive impairment 1 year after SARS-CoV-2 infection, preliminary results of new research suggest.

The findings provide important insight into the long-term cognitive impact of COVID-19, said study investigator Gabriela Gonzalez-Alemán, PhD, professor at Pontifical Catholic University of Argentina, Buenos Aires.

The more information that can be gathered on factors increasing risks for this cognitive impact, “the better we can track it and begin to develop methods to prevent it,” she said.

The findings were presented at the Alzheimer’s Association International Conference.
 

Memory, attention problems

COVID-19 has infected more than 570 million people worldwide, and infection may result in long-term sequelae, including neuropsychiatric symptoms, said Dr. Gonzalez-Alemán.

In older adults, COVID-19 sequelae may resemble early Alzheimer’s disease, and the two conditions may share risk factors and blood biomarkers.

The new study presented 1-year results from a large, prospective cohort study in Argentina. To evaluate the long-term consequences of COVID-19 in older adults, the researchers used measures recommended by the Alzheimer’s Association Consortium on Chronic Neuropsychiatric Sequelae of SARS-CoV-2 infection (CNS SC2).

Harmonizing definitions and methodologies for studying COVID-19’s impact on the brain allows consortium members to compare study results, said Dr. Gonzalez-Alemán.

The investigators used the health registry in the province of Jujuy, situated in the extreme northwestern part of Argentina. The registry includes all SARS-CoV-2 testing data for the entire region.

The investigators randomly invited adults aged 60 years and older from the registry to participate in the study. The current analysis included 766 adults aged 55-95 years (mean age 66.9 years; 57% female) with an average of 10.4 years of education. The education system in Argentina includes 12 years of school before university.

Investigators stratified subjects by polymerase chain reaction testing status. Of the total, 88.4% were infected with COVID and 11.6% were controls (subjects without COVID).

The neurocognitive assessment covered four cognitive domains (memory, attention, language, and executive function) and included an olfactory test that determined the degree of olfactory dysfunction. Cognitive impairment was defined as a z score below –2.

Researchers divided participants into groups according to cognitive performance. These included normal cognition, memory-only impairment (single domain; 11.7%), impairment in attention and executive function without memory impairment (two domains; 8.3%), and multiple domain impairment (11.6%).

“Our participants showed a predominance of memory impairment as would be seen in Alzheimer’s disease,” noted Dr. Gonzalez-Alemán. “And a large group showed a combination of memory and attention problems.”

About 40% of the study sample – but no controls – had olfactory dysfunction.

“All the subjects that had a severe cognitive impairment also had anosmia [loss of smell],” said Dr. Gonzalez-Alemán. “We established an association between olfactory dysfunction and cognitive performance and impairment.”

The analysis showed that severity of anosmia, but not clinical status, significantly predicted cognitive impairment. “So, anosmia could be a good predictor of cognitive impairment after COVID-19 infection,” said Dr. Gonzalez-Alemán.

For individuals older than 60 years, cognitive impairment can be persistent, as can be olfactory dysfunction, she added.

Results of a 1-year phone survey showed that about 71.8% of subjects had received three vaccine doses and 24.9% had received two doses. About 12.5% of those with three doses were reinfected, compared with 23.3% of those with two doses.

Longest follow-up to date

Commenting on the research, Heather Snyder, PhD, vice president, medical and scientific relations at the Alzheimer’s Association, noted the study is “the longest follow-up we’ve seen” looking at the connection between persistent loss of smell and cognitive changes after a COVID-19 infection.

The study included a “fairly large” sample size and was “unique” in that it was set up in a part of the country with centralized testing, said Dr. Snyder.

The Argentinian group is among the most advanced of those connected to the CNS SC2, said Dr. Snyder.

Members of this Alzheimer’s Association consortium, said Dr. Snyder, regularly share updates of ongoing studies, which are at different stages and looking at various neuropsychiatric impacts of COVID-19. It is important to bring these groups together to determine what those impacts are “because no one group will be able to do this on their own,” she said. “We saw pretty early on that some individuals had changes in the brain, or changes in cognition, and loss of sense of smell or taste, which indicates there’s a connection to the brain.”

However, she added, “there’s still a lot we don’t know” about this connection.

The study was funded by the Alzheimer’s Association and FULTRA.

A version of this article first appeared on Medscape.com.


Pharmacogenomic testing may curb drug interactions in severe depression

Article Type
Changed
Tue, 08/02/2022 - 15:02

Pharmacogenetic testing, which is used to classify how patients with major depressive disorder (MDD) metabolize medications, reduces adverse drug-gene interactions, new research shows.

In a randomized clinical trial that included almost 2,000 adults with MDD, patients in the pharmacogenomics-guided group were more likely to receive an antidepressant that had no potential drug-gene interaction than the patients who received usual care.

In addition, among the intervention group, the rate of remission over 24 weeks was significantly greater.

“These tests can be helpful in rethinking choices of antidepressants, but clinicians should not expect them to be helpful for every patient,” study investigator David W. Oslin, MD, of the Corporal Michael J. Crescenz VA Medical Center and professor of psychiatry at the Perelman School of Medicine, University of Pennsylvania, Philadelphia, said in an interview.

The findings were published online in JAMA.

Less trial and error

Pharmacogenomic testing can provide information to inform drug selection or dosing for patients with a genetic variation that alters pharmacokinetics or pharmacodynamics. Such testing may be particularly useful for patients with MDD, as fewer than 40% of these patients achieve clinical remission after an initial treatment with an antidepressant, the investigators note.

“To get to a treatment that works for an individual, it’s not unusual to have to try two or three or four antidepressants,” said Dr. Oslin. “If we could reduce that variance a little bit with a test like this, that would be huge from a public health perspective.”

The study included 676 physicians and 1,944 adults with MDD (mean age, 48 years; 24% women) who were receiving care at 22 Department of Veterans Affairs medical centers. Eligible patients were set to start a new antidepressant monotherapy, and all underwent a pharmacogenomic test using a cheek swab.

Investigators randomly assigned patients to receive test results when available (pharmacogenomic-guided group) or 24 weeks later (usual-care group). For the former group, clinicians were asked to initiate treatment when test results were available, typically within 2-3 days. For the latter group, they were asked to initiate treatment on the day of randomization.

Assessments included the 9-item Patient Health Questionnaire (PHQ-9), for which scores range from 0 to 27 points, with higher scores indicating worse symptoms.

Of the total patient population, 79% completed the 24-week assessment.

Researchers characterized antidepressant medications on the basis of drug-gene interaction categories: no known interactions, moderate interactions, and substantial interactions.

The co-primary outcomes were treatment initiation within 30 days, determined on the basis of drug-gene interaction categories, and remission from depression symptoms, defined as a PHQ-9 score of less than or equal to 5.

Raters who were blinded to clinical care and study randomization assessed outcomes at 4, 8, 12, 18, and 24 weeks.
 

Significant impact?

Results showed that the pharmacogenomic-guided group was more likely to receive an antidepressant that had no potential drug-gene interaction, as opposed to one with a moderate/substantial interaction (odds ratio, 4.32; 95% confidence interval, 3.47-5.39; P < .001).

The pharmacogenomic-guided group was also more likely to receive a drug with no or moderate, rather than substantial, potential drug-gene interaction (no/moderate interaction vs. substantial interaction: OR, 2.08; 95% CI, 1.52-2.84; P = .005).

For the intervention group, the estimated rates of receiving an antidepressant with no, moderate, and substantial drug-gene interactions were 59.3%, 30.0%, and 10.7%, respectively. For the usual-care group, the estimates were 25.7%, 54.6%, and 19.7%.

The finding that 1 in 5 patients who received usual care were initially given a medication for which there were significant drug-gene interactions means it is “not a rare event,” said Dr. Oslin. “If we can make an impact on 20% of the people we prescribe to, that’s actually pretty big.”

Rates of remission were greater in the pharmacogenomic-guided group over 24 weeks (OR, 1.28; 95% CI, 1.05-1.57; P = .02; absolute risk difference, 2.8%; 95% CI, 0.6%-5.1%).

The secondary outcomes of response to treatment, defined as at least a 50% decrease in PHQ-9 score, also favored the pharmacogenomic-guided group. This was also the case for the secondary outcome of reduction in symptom severity on the PHQ-9 score.

Some physicians have expressed skepticism about pharmacogenomic testing, but the study provides additional evidence of its usefulness, Dr. Oslin noted.

“While I don’t think testing should be standard of practice, I also don’t think we should put barriers into the testing until we can better understand how to target the testing” to those who will benefit the most, he added.

The tests are available at a commercial cost of about $1,000 – which may not be that expensive if testing has a significant impact on a patient’s life, said Dr. Oslin.

Important research, but with several limitations

In an accompanying editorial, Dan V. Iosifescu, MD, associate professor of psychiatry at New York University School of Medicine and director of clinical research at the Nathan Kline Institute for Psychiatric Research, called the study an important addition to the literature on pharmacogenomic testing for patients with MDD.

The study was significantly larger and had broader inclusion criteria and longer follow-up than previous clinical trials and is one of the few investigations not funded by a manufacturer of pharmacogenomic tests, writes Dr. Iosifescu, who was not involved with the research.

However, he notes that an antidepressant was not initiated within 30 days of randomization in 25% of the intervention group and in 31% of the usual-care group, which was “puzzling.” “Because these rates were comparable in the 2 groups, it cannot be explained primarily by the delay of the pharmacogenomic test results in the intervention group,” he writes.

In addition, in the co-primary outcome of symptom remission rate, the difference in clinical improvement in favor of the pharmacogenomic-guided treatment was only “modest” – the gain was less than 2% in the proportion of patients achieving remission, Dr. Iosifescu adds.

He adds this is “likely not very meaningful clinically despite this difference achieving statistical significance in this large study sample.”

Other potential study limitations he cites include the lack of patient blinding to treatment assignment and the absence of clarity about why rates of MDD response and remission over time were relatively low in both treatment groups.

A possible approach to optimize antidepressant choices could involve integration of pharmacogenomic data into larger predictive models that include clinical and demographic variables, Dr. Iosifescu notes.

“The development of such complex models is challenging, but it is now possible given the recent substantial advances in the proficiency of computational tools,” he writes.

The study was funded by the U.S. Department of Veterans Affairs (VA), Health Services Research and Development Service, and the Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center. Dr. Oslin reports having received grants from the VA Office of Research and Development and Janssen Pharmaceuticals and nonfinancial support from Myriad Genetics during the conduct of the study. Dr. Iosifescu reports having received personal fees from Alkermes, Allergan, Axsome, Biogen, the Centers for Psychiatric Excellence, Jazz, Lundbeck, Precision Neuroscience, Sage, and Sunovion and grants from Alkermes, AstraZeneca, Brainsway, Litecure, Neosync, Otsuka, Roche, and Shire.

A version of this article first appeared on Medscape.com.


Growing evidence gardening cultivates mental health

Article Type
Changed
Mon, 07/25/2022 - 09:07

Taking up gardening is linked to improved mood and decreased stress, new research suggests.

The results of the small pilot study add to the growing body of evidence supporting the therapeutic value of gardening, study investigator Charles Guy, PhD, professor emeritus, University of Florida Institute of Food and Agricultural Sciences, Gainesville, told this news organization.

“If we can see therapeutic benefits among healthy individuals in a rigorously designed study, where variability was as controlled as you will see in this field, then now is the time to invest in some large-scale multi-institutional studies,” Dr. Guy added.

The study was published online in PLOS ONE.
 

Horticulture as therapy

Horticulture therapy involves engaging in gardening and plant-based activities facilitated by a trained therapist. Previous studies found that this intervention reduces apathy and improves cognitive function in some populations.

The current study included healthy, nonsmoking, and non–drug-using women, whose average age was about 32.5 years and whose body mass index was less than 32. The participants had no chronic conditions and were not allergic to pollen or plants.

Virtually all previous studies of therapeutic gardening included participants who had been diagnosed with conditions such as depression, chronic pain, or PTSD. “If we can see a therapeutic benefit with perfectly healthy people, then this is likely to have a therapeutic effect with whatever clinical population you might be interested in looking at,” said Dr. Guy.

In addition, including only women reduced variability, which is important in a small study, he said.

The researchers randomly assigned 20 participants to the gardening intervention and 20 to an art intervention. Each intervention consisted of twice-weekly 60-minute sessions for 4 weeks and a single follow-up session.

The art group was asked not to visit art galleries, museums, arts and crafts events, or art-related websites. Those in the gardening group were told not to visit parks or botanical gardens, not to engage in gardening activities, and not to visit gardening websites.

Activities in both groups involved a similar level of physical, cognitive, and social engagement. Gardeners were taught how to plant seeds and transplant and harvest edible crops, such as tomatoes, beans, and basil. Those in the art group learned papermaking and storytelling through drawing, printmaking, and mixed media collage.

At the beginning and end of the study, participants completed six questionnaires: the Profile of Mood States 2-A (POMS) short form, the Perceived Stress Scale (PSS), the Beck Depression Inventory II (BDI-II), the State-Trait Anxiety Inventory for Adults, the Satisfaction With Participation in Discretionary Social Activities, and the 36-item Short-Form Survey.

Participants wore wrist cuff blood pressure and heart rate monitors.

The analysis included 15 persons in the gardening group and 17 in the art group.

Participants in both interventions improved on several scales. For example, the mean preintervention POMS total mood disturbance (TMD) T score for gardeners was 53.1, which fell to 46.9 post intervention (P = .018). In the art group, the mean score was 53.5 before the intervention and 47.0 after the intervention (P = .009).

For the PSS, mean scores went from 14.9 to 9.4 (P = .002) for gardening and from 15.8 to 10.0 (P = .001) for artmaking.

For the BDI-II, mean scores dropped from 8.2 to 2.8 (P = .001) for gardening and from 9.0 to 5.1 (P = .009) for art.

However, gardening was associated with less trait anxiety than artmaking. “We concluded that both interventions were roughly equally therapeutic, with one glaring exception, and that was with trait anxiety, where the gardening resulted in statistical separation from the art group,” said Dr. Guy.

There appeared to be dose responses for total mood disturbance, perceived stress, and depression symptomatology for both gardening and artmaking.

Neither intervention affected heart rate or blood pressure. A larger sample might be needed to detect treatment differences in healthy women, the investigators noted.

The therapeutic benefit of gardening may lie in the role of plants in human evolution, during which “we relied on plants for shelter; we relied on them for protection; we relied on them obviously for nutrition,” said Dr. Guy.

The study results support carrying out large, rigorously designed trials “that will definitively and conclusively demonstrate treatment effects with quantitative descriptions of those treatment effects with respect to dosage,” he said.

Good for the mind

Commenting on the study, Sir Richard Thompson, MD, past president, Royal College of Physicians, London, who has written about the health benefits of gardening, said this new study provides “more evidence that both gardening and art therapy are good for the mind” with mostly equal benefits for the two interventions.


“A much larger study would be needed to strengthen their case, but it fits in with much of the literature,” said Dr. Thompson.

However, he acknowledged the difficulty of carrying out scientifically robust studies in the field of alternative medicine, which “tends to be frowned upon” by some scientists.

Dr. Thompson identified some drawbacks of the study. In trying to measure so many parameters, the authors “may have had to resort to complex statistical analyses,” which may have led to some outcome changes being statistically positive by chance.

He noted that the study was small and that the gardening arm was “artificial” in that it was carried out in a greenhouse. “Maybe being outside would have been more beneficial; it would be interesting to test that hypothesis.”

As well, he pointed out initial differences between the two groups, including income and initial blood pressure, but he doubts these were significant.

He agreed that changes in cardiovascular parameters wouldn’t be expected in healthy young women, “as there’s little room for improvement.

“I wonder whether more improvement might have been seen in participants who were already suffering from anxiety, depression, etc.”

The study was supported by the Horticulture Research Institute, the Gene and Barbara Batson Endowed Nursery Fund, Florida Nursery Growers and Landscape Association, the Institute of Food and Agricultural Sciences, Wilmot Botanical Gardens, the Center for Arts in Medicine, Health Shands Arts in Medicine, and the department of environmental horticulture at the University of Florida. The authors disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Taking up gardening is linked to improved mood and decreased stress, new research suggests.

The results of the small pilot study add to the growing body of evidence supporting the therapeutic value of gardening, study investigator Charles Guy, PhD, professor emeritus, University of Florida Institute of Food and Agricultural Sciences, Gainesville, told this news organization.

“If we can see therapeutic benefits among healthy individuals in a rigorously designed study, where variability was as controlled as you will see in this field, then now is the time to invest in some large-scale multi-institutional studies,” Dr. Guy added.

The study was published online in PLOS ONE.
 

Horticulture as therapy

Horticulture therapy involves engaging in gardening and plant-based activities facilitated by a trained therapist. Previous studies found that this intervention reduces apathy and improves cognitive function in some populations.

The current study included healthy, nonsmoking, and non–drug-using women, whose average age was about 32.5 years and whose body mass index was less than 32. The participants had no chronic conditions and were not allergic to pollen or plants.

Virtually all previous studies of therapeutic gardening included participants who had been diagnosed with conditions such as depression, chronic pain, or PTSD. “If we can see a therapeutic benefit with perfectly healthy people, then this is likely to have a therapeutic effect with whatever clinical population you might be interested in looking at,” said Dr. Guy.

In addition, including only women reduced variability, which is important in a small study, he said.

The researchers randomly assigned 20 participants to the gardening intervention and 20 to an art intervention. Each intervention consisted of twice-weekly 60-minute sessions for 4 weeks and a single follow-up session.

The art group was asked not to visit art galleries, museums, arts and crafts events, or art-related websites. Those in the gardening group were told not to visit parks or botanical gardens, not to engage in gardening activities, and not to visit gardening websites.

Activities in both groups involved a similar level of physical, cognitive, and social engagement. Gardeners were taught how to plant seeds and transplant and harvest edible crops, such as tomatoes, beans, and basil. Those in the art group learned papermaking and storytelling through drawing, printmaking, and mixed media collage.

At the beginning and end of the study, participants completed six questionnaires: the Profile of Mood States 2-A (POMS) short form, the Perceived Stress Scale (PSS), the Beck Depression Inventory II (BDI-II), the State-Trait Anxiety Inventory for Adults, the Satisfaction With Participation in Discretionary Social Activities, and the 36-item Short-Form Survey.

Participants wore wrist cuff blood pressure and heart rate monitors.

The analysis included 15 persons in the gardening group and 17 in the art group.

Participants in both interventions improved on several scales. For example, the mean preintervention POMS TMD (T score) for gardeners was 53.1, which was reduced to a mean of 46.9 post intervention (P = .018). In the art group, the means score was 53.5 before the intervention and 47.0 after the intervention (P = .009).

For the PSS, mean scores went from 14.9 to 9.4 (P = .002) for gardening and from 15.8 to 10.0 (P = .001) for artmaking.

For the BDI-II, mean scores dropped from 8.2 to 2.8 (P = .001) for gardening and from 9.0 to 5.1 (P = .009) for art.

However, gardening was associated with less trait anxiety than artmaking. “We concluded that both interventions were roughly equally therapeutic, with one glaring exception, and that was with trait anxiety, where the gardening resulted in statistical separation from the art group,” said Dr. Guy.

There appeared to be dose responses for total mood disturbance, perceived stress, and depression symptomatology for both gardening and artmaking.

Neither intervention affected heart rate or blood pressure. A larger sample might be needed to detect treatment differences in healthy women, the investigators noted.

The therapeutic benefit of gardening may lie in the role of plants in human evolution, during which “we relied on plants for shelter; we relied on them for protection; we relied on them obviously for nutrition,” said Dr. Guy.

The study results support carrying out large, well-designed, rigorously designed trials “that will definitively and conclusively demonstrate treatment effects with quantitative descriptions of those treatment effects with respect to dosage,” he said.
 

 

 

Good for the mind

Commenting on the study, Sir Richard Thompson, MD, past president, Royal College of Physicians, London, who has written about the health benefits of gardening, said this new study provides “more evidence that both gardening and art therapy are good for the mind” with mostly equal benefits for the two interventions.

Anuradha Dullewe Wijeyeratne
Dr. Richard Thompson

“A much larger study would be needed to strengthen their case, but it fits in with much of the literature,” said Dr. Thompson.

However, he acknowledged the difficulty of carrying out scientifically robust studies in the field of alternative medicine, which “tends to be frowned upon” by some scientists.

Dr. Thompson identified some drawbacks of the study. In trying to measure so many parameters, the authors “may have had to resort to complex statistical analyses,” which may have led to some outcome changes being statistically positive by chance.

He noted that the study was small and that the gardening arm was “artificial” in that it was carried out in a greenhouse. “Maybe being outside would have been more beneficial; it would be interesting to test that hypothesis.”

As well, he pointed out initial differences between the two groups, including income and initial blood pressure, but he doubts these were significant.

He agreed that changes in cardiovascular parameters wouldn’t be expected in healthy young women, “as there’s little room for improvement.

“I wonder whether more improvement might have been seen in participants who were already suffering from anxiety, depression, etc.”

The study was supported by the Horticulture Research Institute, the Gene and Barbara Batson Endowed Nursery Fund, Florida Nursery Growers and Landscape Association, the Institute of Food and Agricultural Sciences, Wilmot Botanical Gardens, the Center for Arts in Medicine, Health Shands Arts in Medicine, and the department of environmental horticulture at the University of Florida. The authors disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Taking up gardening is linked to improved mood and decreased stress, new research suggests.

The results of the small pilot study add to the growing body of evidence supporting the therapeutic value of gardening, study investigator Charles Guy, PhD, professor emeritus, University of Florida Institute of Food and Agricultural Sciences, Gainesville, told this news organization.

“If we can see therapeutic benefits among healthy individuals in a rigorously designed study, where variability was as controlled as you will see in this field, then now is the time to invest in some large-scale multi-institutional studies,” Dr. Guy added.

The study was published online in PLOS ONE.
 

Horticulture as therapy

Horticulture therapy involves engaging in gardening and plant-based activities facilitated by a trained therapist. Previous studies found that this intervention reduces apathy and improves cognitive function in some populations.

The current study included healthy, nonsmoking, and non–drug-using women, whose average age was about 32.5 years and whose body mass index was less than 32. The participants had no chronic conditions and were not allergic to pollen or plants.

Virtually all previous studies of therapeutic gardening included participants who had been diagnosed with conditions such as depression, chronic pain, or PTSD. “If we can see a therapeutic benefit with perfectly healthy people, then this is likely to have a therapeutic effect with whatever clinical population you might be interested in looking at,” said Dr. Guy.

In addition, including only women reduced variability, which is important in a small study, he said.

The researchers randomly assigned 20 participants to the gardening intervention and 20 to an art intervention. Each intervention consisted of twice-weekly 60-minute sessions for 4 weeks and a single follow-up session.

The art group was asked not to visit art galleries, museums, arts and crafts events, or art-related websites. Those in the gardening group were told not to visit parks or botanical gardens, not to engage in gardening activities, and not to visit gardening websites.

Activities in both groups involved a similar level of physical, cognitive, and social engagement. Gardeners were taught how to plant seeds and transplant and harvest edible crops, such as tomatoes, beans, and basil. Those in the art group learned papermaking and storytelling through drawing, printmaking, and mixed media collage.

At the beginning and end of the study, participants completed six questionnaires: the Profile of Mood States 2-A (POMS) short form, the Perceived Stress Scale (PSS), the Beck Depression Inventory II (BDI-II), the State-Trait Anxiety Inventory for Adults, the Satisfaction With Participation in Discretionary Social Activities, and the 36-item Short-Form Survey.

Participants wore wrist cuff blood pressure and heart rate monitors.

The analysis included 15 persons in the gardening group and 17 in the art group.

Participants in both interventions improved on several scales. For example, the mean preintervention POMS TMD (T score) for gardeners was 53.1, which was reduced to a mean of 46.9 post intervention (P = .018). In the art group, the means score was 53.5 before the intervention and 47.0 after the intervention (P = .009).

For the PSS, mean scores went from 14.9 to 9.4 (P = .002) for gardening and from 15.8 to 10.0 (P = .001) for artmaking.

For the BDI-II, mean scores dropped from 8.2 to 2.8 (P = .001) for gardening and from 9.0 to 5.1 (P = .009) for art.

However, gardening was associated with less trait anxiety than artmaking. “We concluded that both interventions were roughly equally therapeutic, with one glaring exception, and that was with trait anxiety, where the gardening resulted in statistical separation from the art group,” said Dr. Guy.

There appeared to be dose responses for total mood disturbance, perceived stress, and depression symptomatology for both gardening and artmaking.

Neither intervention affected heart rate or blood pressure. A larger sample might be needed to detect treatment differences in healthy women, the investigators noted.

The therapeutic benefit of gardening may lie in the role of plants in human evolution, during which “we relied on plants for shelter; we relied on them for protection; we relied on them obviously for nutrition,” said Dr. Guy.

The study results support carrying out large, well-designed, rigorously designed trials “that will definitively and conclusively demonstrate treatment effects with quantitative descriptions of those treatment effects with respect to dosage,” he said.

Good for the mind

Commenting on the study, Sir Richard Thompson, MD, past president, Royal College of Physicians, London, who has written about the health benefits of gardening, said this new study provides “more evidence that both gardening and art therapy are good for the mind” with mostly equal benefits for the two interventions.

Dr. Richard Thompson

“A much larger study would be needed to strengthen their case, but it fits in with much of the literature,” said Dr. Thompson.

However, he acknowledged the difficulty of carrying out scientifically robust studies in the field of alternative medicine, which “tends to be frowned upon” by some scientists.

Dr. Thompson identified some drawbacks of the study. In trying to measure so many parameters, the authors “may have had to resort to complex statistical analyses,” which may have led to some outcome changes being statistically positive by chance.

He noted that the study was small and that the gardening arm was “artificial” in that it was carried out in a greenhouse. “Maybe being outside would have been more beneficial; it would be interesting to test that hypothesis.”

As well, he pointed out initial differences between the two groups, including income and initial blood pressure, but he doubts these were significant.

He agreed that changes in cardiovascular parameters wouldn’t be expected in healthy young women, “as there’s little room for improvement.

“I wonder whether more improvement might have been seen in participants who were already suffering from anxiety, depression, etc.”

The study was supported by the Horticulture Research Institute, the Gene and Barbara Batson Endowed Nursery Fund, Florida Nursery Growers and Landscape Association, the Institute of Food and Agricultural Sciences, Wilmot Botanical Gardens, the Center for Arts in Medicine, Health Shands Arts in Medicine, and the department of environmental horticulture at the University of Florida. The authors disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Article Source: FROM PLOS ONE

Alcohol’s detrimental impact on the brain explained?

Article Type
Changed
Thu, 12/15/2022 - 15:37

Iron accumulation in the brain as a result of alcohol consumption may explain why even moderate drinking is linked to compromised cognitive function.

Results of a large observational study suggest brain iron accumulation is a “plausible pathway” through which alcohol negatively affects cognition, study author Anya Topiwala, MD, PhD, senior clinical researcher, Nuffield Department of Population Health, University of Oxford, England, said in an interview.

Study participants who drank more than 7 units (56 grams) of alcohol a week had higher brain iron levels. The U.K. guideline for “low risk” alcohol consumption is less than 14 units weekly, or 112 grams.

“We are finding harmful associations with iron within those low-risk alcohol intake guidelines,” said Dr. Topiwala.

The study was published online in PLOS Medicine.
 

Early intervention opportunity?

Previous research suggests higher brain iron may be involved in the pathophysiology of Alzheimer’s and Parkinson’s diseases. However, it’s unclear whether deposition plays a role in alcohol’s effect on the brain and if it does, whether this could present an opportunity for early intervention with, for example, chelating agents.

The study included 20,729 participants in the UK Biobank study, which recruited volunteers from 2006 to 2010. Participants had a mean age of 54.8 years, and 48.6% were female.

Participants self-identified as current, never, or previous alcohol consumers. For current drinkers, researchers calculated the total number of U.K. units of alcohol consumed per week. One U.K. unit is 8 grams of pure alcohol; a standard drink in the United States contains 14 grams. The researchers categorized weekly consumption into quintiles and used the lowest quintile as the reference category.
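
As a rough illustration of that unit arithmetic (a sketch, not part of the study), the snippet below converts weekly grams of alcohol to U.K. units and flags intake at or above the 14-unit guideline; the constants come from the figures above, and the helper names are ours.

```python
# Sketch of the unit arithmetic described above (8 g of alcohol per U.K. unit,
# 14 g per U.S. standard drink, "low risk" defined as under 14 U.K. units/week).
GRAMS_PER_UK_UNIT = 8.0
GRAMS_PER_US_DRINK = 14.0
LOW_RISK_UNITS_PER_WEEK = 14.0

def weekly_grams_to_uk_units(grams: float) -> float:
    return grams / GRAMS_PER_UK_UNIT

def exceeds_low_risk(grams_per_week: float) -> bool:
    return weekly_grams_to_uk_units(grams_per_week) >= LOW_RISK_UNITS_PER_WEEK

# 56 g = 7 units; 112 g = the 14-unit guideline; 141.6 g = the cohort mean of 17.7 units
for grams in (56, 112, 141.6):
    units = weekly_grams_to_uk_units(grams)
    print(f"{grams:6.1f} g/week = {units:4.1f} U.K. units "
          f"({grams / GRAMS_PER_US_DRINK:.1f} U.S. drinks); "
          f"at or above guideline: {exceeds_low_risk(grams)}")
```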

Participants underwent MRI to determine brain iron levels. Areas of interest were deep brain structures in the basal ganglia.

Mean weekly alcohol consumption was 17.7 units, which is higher than U.K. guidelines for low-risk consumption. “Half of the sample were drinking above what is recommended,” said Dr. Topiwala.

Alcohol consumption was associated with markers of higher iron in the bilateral putamen (beta, 0.08 standard deviation; 95% confidence interval, 0.06-0.09; P < .001), caudate (beta, 0.05; 95% CI, 0.04-0.07; P < .001), and substantia nigra (beta, 0.03; 95% CI, 0.02-0.05; P < .001).
 

Poorer performance

Drinking more than 7 units (56 grams) weekly was associated with higher susceptibility for all brain regions, except the thalamus.

Controlling for menopause status did not alter associations between alcohol and susceptibility for any brain region. This was also the case when excluding blood pressure and cholesterol as covariates.

There were significant interactions with age in the bilateral putamen and caudate but not with sex, smoking, or Townsend Deprivation Index, which includes such factors as unemployment and living conditions.

To gather data on liver iron levels, participants underwent abdominal imaging at the same time as brain imaging. Dr. Topiwala explained that the liver is a primary storage center for iron, so it was used as “a kind of surrogate marker” of iron in the body.

The researchers showed an indirect effect of alcohol through systemic iron. A 1 SD increase in weekly alcohol consumption was associated with a 0.05 mg/g (95% CI, 0.02-0.07; P < .001) increase in liver iron. In addition, a 1 mg/g increase in liver iron was associated with a 0.44 (95% CI, 0.35-0.52; P < .001) SD increase in left putamen susceptibility.

In this sample, 32% (95% CI, 22-49; P < .001) of alcohol’s total effect on left putamen susceptibility was mediated via higher systemic iron levels.
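
A back-of-the-envelope way to see where a “proportion mediated” figure like this comes from is the product-of-coefficients calculation sketched below. The total-effect value is an assumed illustration, not taken from the paper, and the study’s actual mediation analysis (including its confidence interval) was more involved.

```python
# Minimal product-of-coefficients sketch of the mediation arithmetic described above.
# Path a: alcohol -> liver iron (0.05 mg/g per SD of weekly alcohol)
# Path b: liver iron -> left putamen susceptibility (0.44 SD per mg/g)
a = 0.05            # SD alcohol -> mg/g liver iron (from the article)
b = 0.44            # mg/g liver iron -> SD putamen susceptibility (from the article)
total_effect = 0.07 # assumed total alcohol -> putamen effect (SD), illustrative only

indirect = a * b
proportion_mediated = indirect / total_effect
print(f"indirect effect = {indirect:.3f} SD, "
      f"proportion mediated = {proportion_mediated:.0%}")
```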

To minimize the impact of other factors influencing the association between alcohol consumption and brain iron – and the possibility that people with more brain iron drink more – the researchers used Mendelian randomization, which relies on genetically predicted alcohol intake. This analysis supported the association between alcohol consumption and brain iron.
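
For readers unfamiliar with the method, the simplest Mendelian randomization estimator is a ratio of genetic associations. The sketch below uses hypothetical numbers and is not the model the investigators fit.

```python
# Hedged illustration of the simplest Mendelian randomization estimator
# (the Wald ratio for a single genetic instrument). Numbers are hypothetical.
beta_gene_alcohol = 0.12   # SD change in weekly alcohol per allele (hypothetical)
beta_gene_iron = 0.009     # SD change in putamen susceptibility per allele (hypothetical)

wald_ratio = beta_gene_iron / beta_gene_alcohol
print(f"MR estimate: {wald_ratio:.3f} SD susceptibility per SD of alcohol intake")
```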

Participants completed a cognitive battery, which included trail-making tests that reflect executive function, puzzle tests that assess fluid intelligence or logic and reasoning, and task-based tests using the “Snap” card game to measure reaction time.

Investigators found that the more iron present in certain brain regions, the poorer the participants’ cognitive performance.

Patients should know about the risks of moderate alcohol intake so they can make decisions about drinking, said Dr. Topiwala. “They should be aware that 14 units of alcohol per week is not a zero risk.”

Novel research

Commenting for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations, Alzheimer’s Association, noted the study’s large size as a strength of the research.

She noted previous research has shown an association between higher iron levels and alcohol dependence and worse cognitive function, but the potential connection of brain iron levels, moderate alcohol consumption, and cognition has not been studied to date.

“This paper aims to look at whether there is a potential biological link between moderate alcohol consumption and cognition through iron-related pathways.”

The authors suggest more work is needed to understand whether alcohol consumption impacts iron-related biologies to affect downstream cognition, said Dr. Snyder. “Although this study does not answer that question, it does highlight some important questions.”

Study authors received funding from Wellcome Trust, UK Medical Research Council, National Institute for Health Research (NIHR) Oxford Biomedical Research Centre, BHF Centre of Research Excellence, British Heart Foundation, NIHR Cambridge Biomedical Research Centre, U.S. Department of Veterans Affairs, China Scholarship Council, and Li Ka Shing Centre for Health Information and Discovery. Dr. Topiwala has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Cognitive impairment may predict physical disability in MS

Article Type
Changed
Tue, 08/02/2022 - 14:59

Cognitive impairment is a good predictor of physical disability progression in patients with multiple sclerosis (MS), new research suggests. In an analysis of more than 1,600 patients with secondary-progressive MS (SPMS), the likelihood of needing a wheelchair was almost doubled in those who had the worst scores on cognitive testing measures, compared with their counterparts who had the best scores.

“These findings should change our world view of MS,” study investigator Gavin Giovannoni, PhD, professor of neurology, Blizard Institute, Faculty of Medicine and Dentistry, Queen Mary University of London, told attendees at the Congress of the European Academy of Neurology.

Dr. Gavin Giovannoni


On the basis of the results, clinicians should consider testing cognitive processing speed in patients with MS to identify those who are at increased risk for disease progression, Dr. Giovannoni noted. “I urge anybody who runs an MS service to think about putting in place mechanisms in their clinic” to measure cognition of patients over time, he said.
 

Expand data

Cognitive impairment occurs very early in the course of MS and is part of the disease, although to a greater or lesser degree depending on the patient, Dr. Giovannoni noted. Such impairment has a significant impact on quality of life for patients dealing with this disease, he added.

EXPAND was a phase 3 study of siponimod. Results showed the now-approved oral selective sphingosine 1–phosphate receptor modulator significantly reduced the risk for disability progression in patients with SPMS.

Using the EXPAND clinical trial database, the current researchers assessed 1,628 participants for an association between cognitive processing speed, as measured with the Symbol Digit Modality Test (SDMT), and physical disability progression, as measured with the Expanded Disability Status Scale (EDSS). A score of 7 or more on the EDSS indicates wheelchair dependence.

Dr. Giovannoni noted that cognitive processing speed is considered an indirect measure of thalamic network efficiency and functional brain reserve.

Investigators looked at both the core study, in which all patients continued on treatment or placebo for up to 37 months, and the core plus extension part, in which patients received treatment for up to 5 years.

They separated SDMT scores into quartiles: from worst (n = 435) to two intermediate quartiles (n = 808) to the best quartile (n = 385).
 

Wheelchair dependence

In addition, the researchers examined the predictive value of baseline SDMT, adjusting for treatment, age, gender, baseline EDSS score, baseline SDMT quartile, and the treatment-by-baseline SDMT quartile interaction. On-study SDMT change (months 0-24) was also assessed after adjusting for treatment, age, gender, baseline EDSS, baseline SDMT, and on-study change in SDMT quartile.

In the core study, those in the worst SDMT quartile at baseline were numerically more likely to reach deterioration to EDSS 7 or greater (wheelchair dependent), compared with patients in the best SDMT quartile (hazard ratio, 1.31; 95% confidence interval, .72-2.38; P = .371).

The short-term predictive value of baseline SDMT for reaching sustained EDSS of at least 7 was more obvious in the placebo arm than in the treatment arm.

Dr. Giovannoni said this is likely due to the treatment effect of siponimod preventing relatively more events in the worse quartile, and so reducing the risk for wheelchair dependency.

In the core plus extension part, there was an almost twofold increased risk for wheelchair dependence in the worst versus the best SDMT quartile (HR, 1.81; 95% CI, 1.17-2.78; P = .007).

Both baseline SDMT (HR, 1.81; P = .007) and on-study change in SDMT (HR, 1.73; P = .046) predicted wheelchair dependence in the long-term.
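For context on how hazard ratios like these are typically derived, the sketch below fits a Cox proportional hazards model relating a baseline SDMT-quartile indicator to time until sustained EDSS of 7 or more, using the open-source lifelines package and invented data. This is not the EXPAND analysis itself, which also adjusted for treatment, age, gender, baseline EDSS, and interaction terms, as described above.

```python
# Hedged sketch: Cox proportional hazards model for time to wheelchair dependence
# (sustained EDSS >= 7) by baseline SDMT quartile. Column names and data are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followed": [12, 30, 37, 24, 37, 18, 37, 9, 26, 37],
    "reached_edss7":   [1, 0, 0, 1, 0, 0, 1, 1, 0, 0],   # 1 = became wheelchair dependent
    "sdmt_worst_q":    [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # worst vs. best SDMT quartile
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="reached_edss7")
cph.print_summary()   # prints the hazard ratio for the SDMT-quartile indicator
```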

‘More important than a walking stick’

Measuring cognitive change over time “may be a more important predictor than a walking stick in terms of quality of life and outcomes, and it affects clinical decisionmaking,” said Dr. Giovannoni.

The findings are not novel, as post hoc analyses of other studies showed similar results. However, this new analysis adds more evidence to the importance of cognition in MS, Dr. Giovannoni noted.

“I have patients with EDSS of 0 or 1 who are profoundly disabled because of cognition. You shouldn’t just assume someone is not disabled because they don’t have physical disability,” he said.

However, Dr. Giovannoni noted that the study found an association and does not necessarily indicate a cause.
 

‘Valuable’ insights

Antonia Lefter, MD, of NeuroHope, Monza Oncologic Hospital, Bucharest, Romania, cochaired the session highlighting the research. Commenting on the study, she called this analysis from the “renowned” EXPAND study “valuable.”

In addition, it “underscores” the importance of assessing cognitive processing speed, as it may predict long-term disability progression in patients with SPMS, Dr. Lefter said.

The study was funded by Novartis Pharma AG, Basel, Switzerland. Dr. Giovannoni, a steering committee member of the EXPAND trial, reported receiving consulting fees from AbbVie, Actelion, Atara Bio, Biogen, Celgene, Sanofi-Genzyme, Genentech, GlaxoSmithKline, Merck-Serono, Novartis, Roche, and Reva. He has also received compensation for research from Biogen, Roche, Merck-Serono, Novartis, Sanofi-Genzyme, and Takeda. Dr. Lefter has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Children with migraine at high risk of comorbid anxiety, depression

Article Type
Changed
Tue, 08/02/2022 - 14:57

Children and adolescents with migraine are about twice as likely to have an anxiety or depressive disorder as those without migraine, results from a new review and meta-analysis suggest.

“This is compelling, high-level evidence showing there’s this established comorbidity between migraine and anxiety and depressive symptoms and disorders in this age group,” co-investigator Serena L. Orr, MD, a pediatric neurologist and headache specialist at Alberta Children’s Hospital and assistant professor in the department of pediatrics, University of Calgary (Alta.), told this news organization.

The results “should compel every clinician who is seeing a child or adolescent with migraine to screen for anxiety and depression and to manage that if it’s present. That should be the standard of care with this level of evidence,” Dr. Orr said.

The findings were presented at the American Headache Society (AHS) Annual Meeting 2022.
 

Incidence divergence

Previous studies have suggested that 10%-20% of children and adolescents will experience migraine at some point before adulthood, with the prevalence increasing after puberty.

While the female-to-male ratio is about 1:1 before puberty, there is a “big divergence in incidence curves” afterward – with the female-to-male ratio reaching 2-3:1 in adulthood, Dr. Orr noted. Experts believe hormones drive this divergence, she said, noting that male adults with migraine have lower testosterone levels than male adults without migraine.

Dr. Orr and her colleagues were keen to investigate the relationship between child migraine and anxiety symptoms and disorders, as well as between child migraine and depression symptoms and disorders. They searched the literature for relevant case-control, cross-sectional, and cohort studies with participants up to 18 years of age.

The researchers selected 80 studies to include in the review. Most were carried out in the past 30 to 40 years, and publications in English as well as other languages were eligible. Both community-based and clinical studies were included.

Of the total, 73 studies reported on the association between the exposures and migraine, and 51 were amenable to quantitative pooling.

Results from a meta-analysis that included 16 studies that compared children and adolescents who had migraine with their healthy peers showed a significant association between migraine and anxiety symptoms (standardized mean difference, 1.13; 95% confidence interval, 0.64-1.63; P < .0001).

Compared with children who did not have migraine, those with migraine had almost twice the odds of an anxiety disorder in 15 studies (odds ratio, 1.93; 95% CI, 1.49-2.50; P < .0001).

In addition, there was an association between migraine and depressive symptoms in 17 relevant studies (SMD, 0.67; 95% CI, 0.46-0.87; P < .0001). Participants with versus without migraine also had higher odds of depressive disorders in 18 studies (OR, 2.01; 95% CI, 1.46-2.78; P < .0001).
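
Pooled estimates such as these are usually produced with a random-effects model. The sketch below shows a generic DerSimonian-Laird pooling of study-level log odds ratios with hypothetical inputs; the review’s exact meta-analytic model is not specified in this article.

```python
# Generic DerSimonian-Laird random-effects pooling of log odds ratios.
# The per-study odds ratios and standard errors are hypothetical.
import numpy as np

or_values = np.array([1.6, 2.3, 1.8, 2.5, 1.7])       # hypothetical study ORs
se_log_or = np.array([0.25, 0.30, 0.20, 0.35, 0.28])  # hypothetical SEs of log(OR)

y = np.log(or_values)
v = se_log_or ** 2
w = 1.0 / v

# Heterogeneity (Q) and between-study variance (tau^2)
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pooled estimate and 95% CI
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI, {ci[0]:.2f}-{ci[1]:.2f})")
```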

Effect sizes were similar between community-based and clinic studies. Dr. Orr said it is important to note that the analysis wasn’t restricted to studies with “just kids with really high disease burden who are going to naturally be more predisposed to psychiatric comorbidity.”
 

‘Shocking’ lack of research

The researchers were also interested in determining whether having migraine along with anxiety or depression symptoms or disorders could affect headache-specific outcomes and whether such patients’ conditions would be more refractory to treatment. However, these outcomes were “all over the place” in the 18 relevant studies, Dr. Orr reported.

“Some looked at headache frequency, some at disability, some at school functioning; so, we were not able to put them into a meta-analysis,” she said.

Only two studies examined whether anxiety or depression earlier in childhood predisposes to subsequent migraine, so that issue is still unresolved, Dr. Orr added.

The investigators also assessed whether outcomes with migraine are similar to those with other headache types, such as tension-type headaches. “We did not find a difference at the symptom or disorder level, but there were fewer of those studies” – and these, too, were heterogeneous, said Dr. Orr.

The researchers did not find any studies of the association between migraine and trauma, which Dr. Orr said was “shocking.”

“In the broader pediatric chronic-pain literature, there’s research showing that having a trauma or stress-related disorder is associated with more chronic pain and worse chronic pain outcomes, but we could not find a study that specifically looked at that question in migraine,” she added.

Emerging evidence suggests there may be a bidirectional relationship between migraine and anxiety/depression, at least in adults. Dr. Orr said having these symptoms appears to raise the risk for migraine, but whether that’s environmental or driven by shared genetics isn’t clear.

Experiencing chronic pain may also predispose individuals to anxiety and depression, “but we need more studies on this.”

In addition to screening children with migraine for anxiety and depression, clinicians should advocate for better access to mental health resources for patients with these comorbidities, Dr. Orr noted.

She added that a limitation of the review was that 82.5% of the studies reported unadjusted associations and that 26.3% of the studies were of low quality.
 

High-level evidence

Sara Pavitt, MD, chief of the Pediatric Headache Program and assistant professor in the department of neurology, the University of Texas at Austin, said the investigators “should be applauded” for providing “high-level evidence” to better understand the relationship between migraine and anxiety and depression in pediatric patients.

Such information has been “lacking” for this patient population, said Dr. Pavitt, who was not involved with the research.

She noted that screening kids for mood disorders is challenging, given the relatively few pediatric mental health care providers. A referral for a psychiatric follow-up can mean a 9- to 12-month wait – or even longer for children who do not have insurance or use Medicare.

“Providers need to have more incentives to care for patients with Medicare or lack of insurance – these patients are often excluded from practices because reimbursement is so poor,” Dr. Pavitt said.

Additional pediatric studies are needed to understand how other mental health disorders, such as panic disorder, phobias, and posttraumatic stress disorder, may be related to migraine, she added.

The study received no outside funding. Dr. Orr has received grants from the Canadian Institutes of Health Research and royalties from Cambridge University Press for book publication, and she is on editorial boards of Headache, Neurology, and the American Migraine Foundation. Dr. Pavitt serves on an advisory board for Theranica, which produces a neuromodulation device for acute migraine treatment, although this is not directly relevant to this review.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(8)
Publications
Topics
Sections

Children and adolescents with migraine are about twice as likely to have an anxiety or depressive disorder as those without migraine, results from a new review and meta-analysis suggest.

“This is compelling, high-level evidence showing there’s this established comorbidity between migraine and anxiety and depressive symptoms and disorders in this age group,” co-investigator Serena L. Orr, MD, a pediatric neurologist and headache specialist at Alberta Children’s Hospital and assistant professor in the department of pediatrics, University of Calgary (Alta.), told this news organization.

The results “should compel every clinician who is seeing a child or adolescent with migraine to screen for anxiety and depression and to manage that if it’s present. That should be the standard of care with this level of evidence,” Dr. Orr said.

The findings were presented at the American Headache Society (AHS) Annual Meeting 2022.
 

Incidence divergence

Previous studies have suggested that 10%-20% of children and adolescents will experience migraine at some point before adulthood, with the prevalence increasing after puberty.

While the female-to-male ratio is about 1:1 before puberty, there is a “big divergence in incidence curves” afterward – with the female-to-male ratio reaching 2-3:1 in adulthood, Dr. Orr noted. Experts believe hormones drive this divergence, she said, noting that male adults with migraine have lower testosterone levels than male adults without migraine.

Dr. Orr and her colleagues were keen to investigate the relationship between child migraine and anxiety symptoms and disorders, as well as between child migraine and depression symptoms and disorders. They searched the literature for related case-control, cross-sectional, and cohort studies with participants of ages up to 18 years.

The researchers selected 80 studies to include in the review. Most of the studies were carried out in the past 30 to 40 years and were in English and other languages. Both community-based and clinical studies were included.

Of the total, 73 studies reported on the association between the exposures and migraine, and 51 were amenable to quantitative pooling.

Results from a meta-analysis that included 16 studies that compared children and adolescents who had migraine with their healthy peers showed a significant association between migraine and anxiety symptoms (standardized mean difference, 1.13; 95% confidence interval, 0.64-1.63; P < .0001).

Compared with children who did not have migraine, those with migraine had almost twice the odds of an anxiety disorder in 15 studies (odds ratio, 1.93; 95% CI, 1.49-2.50; P < .0001).

In addition, there was an association between migraine and depressive symptoms in 17 relevant studies (SMD, 0.67; 95% CI, 0.46-0.87; P < .0001). Participants with versus without migraine also had higher odds of depressive disorders in 18 studies (OR, 2.01; 95% CI, 1.46-2.78; P < .0001).

Effect sizes were similar between community-based and clinic studies. Dr. Orr said it is important to note that the analysis wasn’t restricted to studies with “just kids with really high disease burden who are going to naturally be more predisposed to psychiatric comorbidity.”
 

‘Shocking’ lack of research

The researchers were also interested in determining whether having migraine along with anxiety or depression symptoms or disorders could affect headache-specific outcomes and whether such patients’ conditions would be more refractory to treatment. However, these outcomes were “all over the place” in the 18 relevant studies, Dr. Orr reported.

“Some looked at headache frequency, some at disability, some at school functioning; so, we were not able to put them into a meta-analysis,” she said.

Only two studies examined whether anxiety or depression earlier in childhood predisposes to subsequent migraine, so that issue is still unresolved, Dr. Orr added.

The investigators also assessed whether outcomes with migraine are similar to those with other headache types, such as tension-type headaches. “We did not find a difference at the symptom or disorder level, but there were fewer of those studies” – and these, too, were heterogeneous, said Dr. Orr.

The researchers did not find any studies of the association between migraine and trauma, which Dr. Orr said was “shocking.”

“In the broader pediatric chronic-pain literature, there’s research showing that having a trauma or stress-related disorder is associated with more chronic pain and worse chronic pain outcomes, but we could not find a study that specifically looked at that question in migraine,” she added.

Emerging evidence suggests there may be a bidirectional relationship between migraine and anxiety/depression, at least in adults. Dr. Orr said having these symptoms appears to raise the risk for migraine, but whether that’s environmental or driven by shared genetics isn’t clear.

Experiencing chronic pain may also predispose individuals to anxiety and depression, “but we need more studies on this.”

In addition to screening children with migraine for anxiety and depression, clinicians should advocate for better access to mental health resources for patients with these comorbidities, Dr. Orr noted.

She added that a limitation of the review was that 82.5% of the studies reported unadjusted associations and that 26.3% of the studies were of low quality.
 

High-level evidence

Sara Pavitt, MD, chief of the Pediatric Headache Program and assistant professor in the department of neurology, the University of Texas at Austin, said the investigators “should be applauded” for providing “high-level evidence” to better understand the relationship between migraine and anxiety and depression in pediatric patients.

Such information has been “lacking” for this patient population, said Dr. Pavitt, who was not involved with the research.

She noted that screening kids for mood disorders is challenging, given the relatively few pediatric mental health care providers. A referral for a psychiatric follow-up can mean a 9- to 12-month wait – or even longer for children who do not have insurance or use Medicare.

“Providers need to have more incentives to care for patients with Medicare or lack of insurance – these patients are often excluded from practices because reimbursement is so poor,” Dr. Pavitt said.

Additional pediatric studies are needed to understand how other mental health disorders, such as panic disorder, phobias, and posttraumatic stress disorder, may be related to migraine, she added.

The study received no outside funding. Dr. Orr has received grants from the Canadian Institutes of Health Research and royalties from Cambridge University Press for book publication, and she is on editorial boards of Headache, Neurology, and the American Migraine Foundation. Dr. Pavitt serves on an advisory board for Theranica, which produces a neuromodulation device for acute migraine treatment, although this is not directly relevant to this review.

A version of this article first appeared on Medscape.com.

Children and adolescents with migraine are about twice as likely to have an anxiety or depressive disorder as those without migraine, results from a new review and meta-analysis suggest.

“This is compelling, high-level evidence showing there’s this established comorbidity between migraine and anxiety and depressive symptoms and disorders in this age group,” co-investigator Serena L. Orr, MD, a pediatric neurologist and headache specialist at Alberta Children’s Hospital and assistant professor in the department of pediatrics, University of Calgary (Alta.), told this news organization.

The results “should compel every clinician who is seeing a child or adolescent with migraine to screen for anxiety and depression and to manage that if it’s present. That should be the standard of care with this level of evidence,” Dr. Orr said.

The findings were presented at the American Headache Society (AHS) Annual Meeting 2022.
 

Incidence divergence

Previous studies have suggested that 10%-20% of children and adolescents will experience migraine at some point before adulthood, with the prevalence increasing after puberty.

While the female-to-male ratio is about 1:1 before puberty, there is a “big divergence in incidence curves” afterward – with the female-to-male ratio reaching 2-3:1 in adulthood, Dr. Orr noted. Experts believe hormones drive this divergence, she said, noting that male adults with migraine have lower testosterone levels than male adults without migraine.

Dr. Orr and her colleagues were keen to investigate the relationship between child migraine and anxiety symptoms and disorders, as well as between child migraine and depression symptoms and disorders. They searched the literature for related case-control, cross-sectional, and cohort studies with participants of ages up to 18 years.

The researchers selected 80 studies to include in the review. Most of the studies were carried out in the past 30 to 40 years and were in English and other languages. Both community-based and clinical studies were included.

Of the total, 73 studies reported on the association between the exposures and migraine, and 51 were amenable to quantitative pooling.

Results from a meta-analysis that included 16 studies that compared children and adolescents who had migraine with their healthy peers showed a significant association between migraine and anxiety symptoms (standardized mean difference, 1.13; 95% confidence interval, 0.64-1.63; P < .0001).

Compared with children who did not have migraine, those with migraine had almost twice the odds of an anxiety disorder in 15 studies (odds ratio, 1.93; 95% CI, 1.49-2.50; P < .0001).

In addition, there was an association between migraine and depressive symptoms in 17 relevant studies (SMD, 0.67; 95% CI, 0.46-0.87; P < .0001). Participants with versus without migraine also had higher odds of depressive disorders in 18 studies (OR, 2.01; 95% CI, 1.46-2.78; P < .0001).

Effect sizes were similar between community-based and clinic studies. Dr. Orr said it is important to note that the analysis wasn’t restricted to studies with “just kids with really high disease burden who are going to naturally be more predisposed to psychiatric comorbidity.”
 

‘Shocking’ lack of research

The researchers were also interested in determining whether having migraine along with anxiety or depression symptoms or disorders could affect headache-specific outcomes and whether such patients’ conditions would be more refractory to treatment. However, these outcomes were “all over the place” in the 18 relevant studies, Dr. Orr reported.

“Some looked at headache frequency, some at disability, some at school functioning; so, we were not able to put them into a meta-analysis,” she said.

Only two studies examined whether anxiety or depression earlier in childhood predisposes to subsequent migraine, so that issue is still unresolved, Dr. Orr added.

The investigators also assessed whether outcomes with migraine are similar to those with other headache types, such as tension-type headaches. “We did not find a difference at the symptom or disorder level, but there were fewer of those studies” – and these, too, were heterogeneous, said Dr. Orr.

The researchers did not find any studies of the association between migraine and trauma, which Dr. Orr said was “shocking.”

“In the broader pediatric chronic-pain literature, there’s research showing that having a trauma or stress-related disorder is associated with more chronic pain and worse chronic pain outcomes, but we could not find a study that specifically looked at that question in migraine,” she added.

Emerging evidence suggests there may be a bidirectional relationship between migraine and anxiety/depression, at least in adults. Dr. Orr said having these symptoms appears to raise the risk for migraine, but whether that’s environmental or driven by shared genetics isn’t clear.

Experiencing chronic pain may also predispose individuals to anxiety and depression, “but we need more studies on this.”

In addition to screening children with migraine for anxiety and depression, clinicians should advocate for better access to mental health resources for patients with these comorbidities, Dr. Orr noted.

She added that limitations of the review included that 82.5% of the studies reported unadjusted associations and that 26.3% of the studies were of low quality.
 

High-level evidence

Sara Pavitt, MD, chief of the Pediatric Headache Program and assistant professor in the department of neurology at the University of Texas at Austin, said the investigators “should be applauded” for providing “high-level evidence” to better understand the relationship between migraine and anxiety and depression in pediatric patients.

Such information has been “lacking” for this patient population, said Dr. Pavitt, who was not involved with the research.

She noted that screening kids for mood disorders is challenging, given the relative scarcity of pediatric mental health care providers. A referral for psychiatric follow-up can mean a 9- to 12-month wait – or even longer for children who do not have insurance or use Medicare.

“Providers need to have more incentives to care for patients with Medicare or lack of insurance – these patients are often excluded from practices because reimbursement is so poor,” Dr. Pavitt said.

Additional pediatric studies are needed to understand how other mental health disorders, such as panic disorder, phobias, and posttraumatic stress disorder, may be related to migraine, she added.

The study received no outside funding. Dr. Orr has received grants from the Canadian Institutes of Health Research and royalties from Cambridge University Press for book publication, and she is on editorial boards of Headache, Neurology, and the American Migraine Foundation. Dr. Pavitt serves on an advisory board for Theranica, which produces a neuromodulation device for acute migraine treatment, although this is not directly relevant to this review.

A version of this article first appeared on Medscape.com.


Promising new tool for better migraine management in primary care

Article Type
Changed
Tue, 07/05/2022 - 08:15

A new tool can help streamline diagnosis and treatment of migraine in the primary care setting, research suggests.

Early results from a small pilot study showed that the tool, essentially a medical record “best-practice alert,” reduces specialist referrals and MRI studies.

The idea behind the tool is to give primary care physicians “fingertip access” to prompts in patients’ electronic health records (EHRs), leading to best migraine management and treatment, said coinvestigator Scott M. Friedenberg, MD, vice chair of clinical practice, Geisinger Medical Center, Danville, Pa.

When clinicians enter a headache diagnosis into a patient’s EHR, a pop-up asks a handful of questions and “prompts them with the right medications so if they just click a button, they can order the medications straight away,” Dr. Friedenberg said.

The findings were presented at the annual meeting of the American Headache Society.
 

Fewer referrals, MRI testing

Researchers reviewed charts for 693 general neurology referrals. About 20% of the patients were referred for headache. In about 80% of these cases, the final diagnosis was migraine and/or chronic daily headache.

The physicians had documented criteria for identifying migraine, such as sensitivity to light, nausea, and missed social activity or work, in fewer than 1% of cases. There’s roughly an 80% chance that if a headache meets two of these three criteria, it is a migraine, Dr. Friedenberg noted.
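
The alert’s underlying logic has not been published, but the 2-of-3 screening rule Dr. Friedenberg describes can be sketched roughly as follows. This is a hypothetical illustration, not the Geisinger best-practice alert itself; the field names and the one-click prompt step are assumptions made for the example.

```python
# Minimal, hypothetical sketch of a 2-of-3 migraine screening rule as described
# in the article. This is NOT the actual Geisinger best-practice alert.
from dataclasses import dataclass

@dataclass
class HeadacheScreen:
    light_sensitivity: bool     # photophobia during headaches
    nausea: bool                # nausea during headaches
    missed_activity: bool       # missed work, school, or social activity

    def likely_migraine(self) -> bool:
        """Return True when at least two of the three criteria are present."""
        criteria = [self.light_sensitivity, self.nausea, self.missed_activity]
        return sum(criteria) >= 2

# Example: photophobia plus missed activities (no nausea) meets the 2-of-3 threshold,
# so a hypothetical alert would prompt documentation and first-line treatment options.
patient = HeadacheScreen(light_sensitivity=True, nausea=False, missed_activity=True)
print(patient.likely_migraine())  # True
```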

About 60% of the participants with headache were referred with no treatment trial. About 20% were referred after having tried two medicines, and 30% were referred after trying one medicine.

“In many cases, we’re being asked to evaluate people with primary headache or uncomplicated headache that has not been treated,” said Dr. Friedenberg.

The investigators developed the tool, and its most recent iteration was tested by 10 physicians at two sites for 3 months. These doctors received no education on headache; they were taught only how to use the tool.

Results showed that referrals for neurology consults dropped 77% and MRI ordering dropped 35% after use of the tool. This translated into a savings of $192,000.

However, using the tool didn’t significantly affect the physicians’ prescribing habits.
 

Migraine frequently undertreated

“When you drill it down, the only thing that changed were medications they were comfortable with, so they increased steroids and nonsteroidal prescribing, but preventives didn’t change, narcotics didn’t change, and CGRP [calcitonin gene-related peptide] inhibitors didn’t change,” Dr. Friedenberg said.

Although believing patients are “not bad enough to treat” might help explain why clinicians did not change prescribing habits, the reality is that many patients have migraine and should be treated, he added.

Dr. Friedenberg pointed out that previous research suggests that 60% or more of patients with a primary headache or migraine are undertreated.

The tool should increase awareness about, and comfort level with, diagnosing and treating migraine among primary care doctors, he noted. “We hope it will make it easier for them to do the right thing and have neurology as a readily available partner,” said Dr. Friedenberg.

“Primary care doctors are incredibly busy and incredibly pressured, and anything you can do to help facilitate that is a positive,” he added.

The researchers now plan to train pharmacists to comanage headache along with primary care doctors, as is done, for example, for patients with diabetes. This should result in a reduction in physician burden, said Dr. Friedenberg.

The next step is to conduct a larger study at the 38 sites in the Geisinger health system. Half the sites will use the new tool, and the other half will continue to use their current headache management process.

“The study will compare everything from MRI ordering to neurology referrals and prescribing, how often patients go to the emergency department, how often they have a clinic visit, whether the provider is satisfied with the tool, and if the patient’s headaches are getting better,” Dr. Friedenberg said.
 

Lessons for clinical practice

Jessica Ailani, MD, director at MedStar Georgetown Headache Center and associate professor in the department of neurology at Georgetown University, cochaired the session in which the research was presented and called the project “really fantastic.”

The study offers “many lessons” for clinical practice and showed that the tool was effective in improving diagnosis of migraine, said Dr. Ailani, who is also secretary of the American Headache Society.

“There’s a long wait time to see specialists, and most migraine can be diagnosed and basic management can be done by primary care physicians,” she said.

“The next step would be to work on a way to improve prescriptions of migraine-specific treatments,” she added.

Dr. Ailani noted that the AHS would be keen to find ways to engage in “collaborative work” with the investigators.

The investigators and Dr. Ailani reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.
