Menopause an independent risk factor for schizophrenia relapse

Article Type
Changed
Fri, 10/28/2022 - 16:24

Menopause appears to be an independent risk factor for relapse in women with schizophrenia spectrum disorders (SSDs), new research suggests.
 

Investigators studied a cohort of close to 62,000 people with SSDs, stratifying individuals by sex and age, and found that starting between the ages of 45 and 50 years – when the menopausal transition is underway – women were more frequently hospitalized for psychosis, compared with men and women younger than 45 years.

In addition, the protective effect of antipsychotic medication was highest in women younger than 45 years and lowest in women aged 45 years or older, even at higher doses.


“Women with schizophrenia who are older than 45 are a vulnerable group for relapse, and higher doses of antipsychotics are not the answer,” lead author Iris Sommer, MD, PhD, professor, department of neuroscience, University Medical Center of Groningen, the Netherlands, told this news organization.

The study was published online in Schizophrenia Bulletin.
 

Vulnerable period

There is an association between estrogen levels and disease severity throughout the life stages of women with SSDs, with lower estrogen levels associated with psychosis, for example, during low estrogenic phases of the menstrual cycle, the investigators note.

“After menopause, estrogen levels remain low, which is associated with a deterioration in the clinical course; therefore, women with SSD have sex-specific psychiatric needs that differ according to their life stage,” they add.

“Estrogens inhibit an important liver enzyme (cytochrome P-450 [CYP1A2]), which leads to higher blood levels of several antipsychotics like olanzapine and clozapine,” said Dr. Sommer. In addition, estrogens make the stomach less acidic, “leading to easier resorption of medication.”

As a clinician, Dr. Sommer said that she has “often witnessed a worsening of symptoms [of psychosis] after menopause.” As a researcher, she “knew that estrogens can have ameliorating effects on brain health, especially in schizophrenia.”

She and her colleagues were motivated to research the issue because there is a “remarkable paucity” of quantitative data on a “vulnerable period that all women with schizophrenia will experience.”
 

Detailed, quantitative data

The researchers sought to provide “detailed, quantitative data on life-stage dependent clinical changes occurring in women with SSD, using an intra-individual design to prevent confounding.”

They drew on data from a nationwide, register-based cohort study of all hospitalized patients with SSD between 1972 and 2014 in Finland (n = 61,889), with follow-up from Jan. 1, 1996, to Dec. 31, 2017.

People were stratified according to age (younger than 45 years and 45 years or older), with the same person contributing person-time to both age groups. The cohort was also subdivided into 5-year age groups, starting at age 20 years and ending at age 69 years.
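The intra-individual design described above, in which the same person contributes follow-up time to both age strata, can be illustrated by splitting one person's follow-up at the 45th birthday. This is a hypothetical sketch of the idea only, not the study's code; the function name, dates, and handling are invented, and a real implementation would also handle leap-day birthdays and censoring.

```python
from datetime import date

def split_person_time(birth: date, start: date, end: date, cut_age: int = 45):
    """Split one follow-up interval into days contributed before and after
    the cut-age birthday, so a person can appear in both age strata."""
    cutoff = date(birth.year + cut_age, birth.month, birth.day)
    if end <= cutoff:        # follow-up ends before the cutoff birthday
        return (end - start).days, 0
    if start >= cutoff:      # follow-up starts after the cutoff birthday
        return 0, (end - start).days
    # Follow-up straddles the 45th birthday: split at the cutoff date
    return (cutoff - start).days, (end - cutoff).days

# Invented example: born 1950, followed 1994-1996 -> one year in each stratum
print(split_person_time(date(1950, 1, 1), date(1994, 1, 1), date(1996, 1, 1)))
# prints (365, 365)
```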

The primary outcome measure was relapse (that is, inpatient hospitalization because of psychosis).

The researchers focused specifically on monotherapies, excluding time periods when two or more antipsychotics were used concomitantly. They also looked at antipsychotic nonuse periods.

Antipsychotic monotherapies were categorized into defined daily doses per day (DDDs/d):

  • less than 0.4
  • 0.4 to less than 0.6
  • 0.6 to less than 0.9
  • 0.9 to less than 1.1
  • 1.1 to less than 1.4
  • 1.4 to less than 1.6
  • 1.6 or more

The researchers restricted the main analyses to the four most frequently used oral antipsychotic monotherapies: clozapine, olanzapine, quetiapine, and risperidone.
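As a rough sketch of how dose exposure could be bucketed into those DDDs/d categories (illustrative code only; the function name, labels, and the olanzapine example are mine, not from the study):

```python
def ddd_category(ddd_per_day: float) -> str:
    """Map a DDDs/d value to one of the study's dose-category labels."""
    edges = [0.4, 0.6, 0.9, 1.1, 1.4, 1.6]
    labels = ["<0.4", "0.4-<0.6", "0.6-<0.9",
              "0.9-<1.1", "1.1-<1.4", "1.4-<1.6", ">=1.6"]
    # Half-open bins: the first edge a value falls below selects its label
    for edge, label in zip(edges, labels):
        if ddd_per_day < edge:
            return label
    return labels[-1]

# Example: olanzapine 10 mg/day, with a WHO defined daily dose of 10 mg,
# equals 1.0 DDDs/d
print(ddd_category(10 / 10))  # prints 0.9-<1.1
```

The WHO assigns each antipsychotic a defined daily dose, so dividing the prescribed daily dose by the drug's DDD yields DDDs/d, which makes doses comparable across different antipsychotics.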
 

The turning tide

The cohort consisted of more men than women (31,104 vs. 30,785, respectively), with a mean (standard deviation) age of 49.8 (16.6) years in women vs. 43.6 (14.8) in men.

Among both sexes, olanzapine was the most prescribed antipsychotic (roughly one-quarter of patients). In women, the next most common antipsychotic was risperidone, followed by quetiapine and clozapine, whereas in men, the second most common antipsychotic was clozapine, followed by risperidone and quetiapine.

When the researchers compared men and women younger than 45 years, there were “few consistent differences” in proportions hospitalized for psychosis.

Starting at age 45 years and continuing through the oldest age group (65-69 years), higher proportions of women were hospitalized for psychosis, compared with their male peers (all Ps < .00001). 

Women 45 or older had significantly higher risk for relapse associated with standard dose use, compared with the other groups.

When the researchers compared men and women older and younger than 45 years, women younger than 45 years showed lower adjusted hazard ratios (aHRs) at doses of 0.6-0.9 DDDs/d, whereas at doses over 1.1 DDDs/d, women aged 45 years or older showed “remarkably higher” aHRs compared with women younger than 45 years and men aged 45 years or older, a difference that increased with increasing dose.

In women, the efficacy of the antipsychotics was decreased at these DDDs/d.

“We ... showed that antipsychotic monotherapy is most effective in preventing relapse in women below 45, as compared to women above that age, and also as compared to men of all ages,” the authors summarize. But after age 45 years, “the tide seems to turn for women,” compared with younger women and with men of the same age group.

One of several study limitations was the use of age as an estimation of menopausal status, they note.
 

Don’t just raise the dose

Commenting on the research, Mary Seeman, MD, professor emerita, department of psychiatry, University of Toronto, noted the study corroborates her group’s findings regarding the effect of menopause on antipsychotic response.

“When the efficacy of previously effective antipsychotic doses wanes at menopause, raising the dose is not the treatment of choice because it increases the risk of weight gain, cardiovascular, and cerebrovascular events,” said Dr. Seeman, who was not involved with the current research.

“Changing to an antipsychotic that is less affected by estrogen loss may work better,” she continued, noting that amisulpride and aripiprazole “work well post menopause.”

Additional interventions may include changing to a depot or skin-patch antipsychotic that “obviates first-pass metabolism,” adding hormone replacement or a selective estrogen receptor modulator, or including phytoestrogens (bioidenticals) in the diet.

The study yields research recommendations, including comparing the effectiveness of different antipsychotics in postmenopausal women with SSDs, recruiting pre- and postmenopausal women in trials of antipsychotic drugs, and stratifying by hormonal status when analyzing results of antipsychotic trials, Dr. Seeman said.

This work was supported by the Finnish Ministry of Social Affairs and Health through the developmental fund for Niuvanniemi Hospital and the Academy of Finland. The Dutch Medical Research Association supported Dr. Sommer. Dr. Sommer declares no relevant financial relationships. The other authors’ disclosures are listed on the original paper. Dr. Seeman declares no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM SCHIZOPHRENIA BULLETIN


Listen up: Birdsong may calm anxiety, paranoia

Article Type
Changed
Mon, 10/31/2022 - 08:49

Listening to birdsong appears to have a positive and significant impact on mental health and mood, new research suggests.

Investigators found that people who listened to recordings of birds singing experienced a significant reduction in anxiety and paranoia. In contrast, the researchers also found that recordings of traffic noises, including car engines, sirens, and construction, increased depressive states.

“The results suggest that it may be worthwhile to investigate the targeted use of natural sounds such as birdsong in a clinical setting – for example, in hospital waiting rooms or in psychiatric settings,” study investigator Emil Stobbe, MSc, a predoctoral fellow at the Max Planck Institute for Human Development, Berlin, said in an interview.

“If someone is seeking an easily accessible intervention to lower distress, listening to an audio clip of birds singing might be a great option,” he added.

The study was published online in Scientific Reports.
 

Nature’s calming effect

The aim of the research was “to investigate how the physical environment impact[s] brain and mental health,” Mr. Stobbe said.

Mr. Stobbe said that there is significantly more research examining visual properties of the physical environment but that the auditory domain is not as well researched, although, he added, the beneficial effects of interactions with nature are “well studied.”

He noted that anxiety and paranoia can be experienced by many individuals even though they may be unaware that they are experiencing these states.

“We wanted to investigate if the beneficial effects of nature can also exert their impact on these states. In theory, birds can be representational for natural and vital environment, which, in turn, transfer the positive effects of nature on birdsong listeners,” he said.

A previous study compared nature versus city soundscape conditions and showed that the nature soundscape improved participants’ cognitive performance but did not improve mood. The present study added diversity to the soundscapes and focused not only on cognition and general mood but also on state paranoia, “which can be measured in a change-sensitive manner” and “has been shown to increase in response to traffic noise.”

The researchers hypothesized that birdsong would have a greater beneficial effect on mood and paranoia and on cognitive performance compared with traffic noise. They also investigated whether greater versus lower diversity of bird species or noise sources within the soundscapes “would be a relevant factor modulating the effects.”

The researchers recruited participants (n = 295) from a crowdsourcing platform. Participants’ mean age was in the late 20s (standard deviations ranged from 6.30 to 7.72), with a greater proportion of male than female participants.

To be included, participants were required to have no history of mental illness, hearing difficulties, substance/drug intake, or suicidal thoughts/tendencies.

The outcomes of interest (mood, paranoia, cognitive performance) were measured before and after soundscape exposure, and each soundscape had a low- and a high-diversity version. This yielded analyses comparing two sound types (birdsong vs. traffic noise) × two levels of diversity (low vs. high) × two time points (pre- vs. postexposure).

Exposure to the sounds lasted 6 minutes, after which participants were asked to report (on a 0-100 visual scale) how diverse/monotone, beautiful, and pleasant they perceived the soundscape to be.

Reduction in depressive symptoms

Participants were divided into four groups: low-diversity traffic noise soundscape (n = 83), high-diversity traffic noise soundscape (n = 60), low-diversity birdsong soundscape (n = 63), and high-diversity birdsong soundscape (n = 80).

In addition to listening to the sounds, participants completed questionnaires measuring mood (depression and anxiety) and paranoia as well as a test of digit span cognitive performance (both the forward and the backward versions).

Sound type, diversity, and the type × diversity interaction all showed significant effects (F[3, 276] = 78.6; P < .001; eta-squared = 0.461; F[3, 276] = 3.16; P = .025; eta-squared = 0.033; and F[3, 276] = 2.66; P = .028, respectively), “suggesting that all of these factors, as well as their interaction, had a significant impact on the perception of soundscapes (that is, ratings on monotony/diversity, beauty, and pleasantness).”

A post hoc examination showed that depressive symptoms significantly increased within the low- and high-diversity urban soundscapes but decreased significantly in the high-diversity birdsong soundscapes (T[1, 60] = –2.57; P = .012; d = –0.29).

For anxiety, the post hoc within-group analyses found no effects within low- and high-diversity traffic noise conditions (T[1, 82] = –1.37; P = .174; d = –0.15 and T[1, 68] = 0.49; P = .629; d = 0.06, respectively). By contrast, there were significant declines in both birdsong conditions (low diversity: T[1, 62] = –6.13; P < .001; d = –0.77; high diversity: T[1, 60] = –6.32; P < .001; d =  –0.70).

Similarly, there were no changes in participants with paranoia when they listened to either low- or high-diversity traffic noises (T[1, 82] = –0.55; P = .583; d = –0.06 and T[1, 68] = 0.67; P = .507; d = 0.08, respectively). On the other hand, both birdsong conditions yielded reductions in paranoia (low diversity: T[1, 62] = –5.90; P < .001; d = –0.74; high diversity: T[1, 60] =  –4.11; P < .001; d = –0.46).
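The within-group pre/post results above pair a t statistic with a Cohen's d. For paired designs like this one, d is the mean pre-to-post difference divided by the standard deviation of the differences, and t = d·√n. A minimal sketch with invented scores (not the study's code or data):

```python
import math

def paired_t_and_d(pre, post):
    """Paired-samples t statistic and Cohen's d for pre/post scores:
    d = mean(differences) / sd(differences), and t = d * sqrt(n)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean / sd * math.sqrt(n), mean / sd

# Invented scores for four participants (not study data):
t, d = paired_t_and_d(pre=[10, 12, 14, 16], post=[8, 11, 13, 17])
print(round(t, 2), round(d, 2))  # prints -1.19 -0.6
```

A negative d here, as in the reported birdsong conditions, indicates scores fell from pre- to postexposure.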

None of the soundscapes had any effect on cognition.

“In theory, birds can be representational for natural and vital environments which, in turn, transfer the positive effects of nature on birdsong listeners,” said Mr. Stobbe.

“Taken together, the findings of the current study provide another facet of why interactions with nature can be beneficial for our mental health, and it is highly important to preserve nature,” he added.

Mr. Stobbe said that future research should focus on mixed soundscapes, including examining whether the presence of natural sounds in urban settings lowers the impact of stressors such as traffic noise.
 

An understudied area

Commenting for this article, Ken Duckworth, MD, chief medical officer of the National Alliance on Mental Illness, called the study “interesting but limited.”

Dr. Duckworth, who was not involved in the research, said that the “benefits of nature are understudied” and agreed with the investigators that it is potentially important to study the use of birdsongs in psychiatric facilities. “Future studies could also correlate the role of birdsong with the mental health benefits/aspects of ‘being in nature,’ which has been found to have some effect.”

Open Access funding was enabled and organized by Projekt DEAL. The authors and Dr. Duckworth declared no competing interests.

A version of this article first appeared on Medscape.com.


For anxiety, the post hoc within-group analyses found no effects within low- and high-diversity traffic noise conditions (T[1, 82] = –1.37; P = .174; d = –0.15 and T[1, 68] = 0.49; P = .629; d = 0.06, respectively). By contrast, there were significant declines in both birdsong conditions (low diversity: T[1, 62] = –6.13; P < .001; d = –0.77; high diversity: T[1, 60] = –6.32; P < .001; d =  –0.70).

Similarly, there were no changes in participants with paranoia when they listened to either low- or high-diversity traffic noises (T[1, 82] = –0.55; P = .583; d = –0.06 and T[1, 68] = 0.67; P = .507; d = 0.08, respectively). On the other hand, both birdsong conditions yielded reductions in paranoia (low diversity: T[1, 62] = –5.90; P < .001; d = –0.74; high diversity: T[1, 60] =  –4.11; P < .001; d = –0.46).

None of the soundscapes had any effect on cognition.

“In theory, birds can be representational for natural and vital environments which, in turn, transfer the positive effects of nature on birdsong listeners,” said Mr. Stobbe.

“Taken together, the findings of the current study provide another facet of why interactions with nature can be beneficial for our mental health, and it is highly important to preserve nature,” he added.

Mr. Stobbe said that future research should focus on investigating mixed soundscapes including examining whether the presence of natural sounds in urban settings lower stressors such as traffic noise.
 

An understudied area

Commenting for this article, Ken Duckworth, MD, chief medical officer of the National Alliance on Mental Illness called the study “interesting but limited.”

Dr. Duckworth, who was not involved in the research said that the “benefits of nature are understudied” and agreed with the investigators that it is potentially important to study the use of birdsongs in psychiatric facilities. “Future studies could also correlate the role of birdsong with the mental health benefits/aspects of ‘being in nature,’ which has been found to have some effect.”

Open Access funding was enabled and organized by Projekt DEAL. The authors and Dr. Duckworth declared no competing interests.

A version of this article first appeared on Medscape.com.

Listening to birdsong appears to have a positive and significant impact on mental health and mood, new research suggests.

Investigators found that people who listened to recordings of birds singing experienced a significant reduction in anxiety and paranoia. In contrast, the researchers also found that recordings of traffic noises, including car engines, sirens, and construction, increased depressive states.

“The results suggest that it may be worthwhile to investigate the targeted use of natural sounds such as birdsong in a clinical setting – for example, in hospital waiting rooms or in psychiatric settings,” study investigator Emil Stobbe, MSc, a predoctoral fellow at the Max Planck Institute for Human Development, Berlin, said in an interview.

“If someone is seeking an easily accessible intervention to lower distress, listening to an audio clip of birds singing might be a great option,” he added.

The study was published online in Scientific Reports.
 

Nature’s calming effect

The aim of the research was "to investigate how the physical environment impacts brain and mental health," Mr. Stobbe said.

Mr. Stobbe said that there is significantly more research examining the visual properties of the physical environment and that the auditory domain is not as well researched, although, he added, the beneficial effects of interactions with nature are "well studied."

He noted that anxiety and paranoia can be experienced by many individuals even though they may be unaware that they are experiencing these states.

"We wanted to investigate if the beneficial effects of nature can also exert their impact on these states. In theory, birds can be representational for natural and vital environments, which, in turn, transfer the positive effects of nature on birdsong listeners," he said.

A previous study compared nature versus city soundscape conditions and showed that the nature soundscape improved participants’ cognitive performance but did not improve mood. The present study added diversity to the soundscapes and focused not only on cognition and general mood but also on state paranoia, “which can be measured in a change-sensitive manner” and “has been shown to increase in response to traffic noise.”

The researchers hypothesized that birdsong would have a greater beneficial effect on mood and paranoia and on cognitive performance compared with traffic noise. They also investigated whether greater versus lower diversity of bird species or noise sources within the soundscapes “would be a relevant factor modulating the effects.”

The researchers recruited participants (n = 295) from a crowdsourcing platform. Participants' mean age was in the late 20s (standard deviations ranged from 6.30 to 7.72 years), and there were more male than female participants.

To be included, participants were required to have no history of mental illness, hearing difficulties, substance/drug intake, or suicidal thoughts/tendencies.

The outcomes of interest (mood, paranoia, and cognitive performance) were measured before and after soundscape exposure, and each soundscape had a low- versus high-diversity version. This yielded analyses comparing two types of sound (birdsong vs. traffic noise) × two levels of diversity (low vs. high) × two time points (pre- vs. postexposure).

Exposure to the sounds lasted 6 minutes, after which participants were asked to report (on a 0-100 visual scale) how diverse/monotone, beautiful, and pleasant they perceived the soundscape to be.

Reduction in depressive symptoms

Participants were divided into four groups: low-diversity traffic noise soundscape (n = 83), high-diversity traffic noise soundscape (n = 60), low-diversity birdsong soundscape (n = 63), and high-diversity birdsong soundscape (n = 80).

In addition to listening to the sounds, participants completed questionnaires measuring mood (depression and anxiety) and paranoia as well as a test of digit span cognitive performance (both the forward and the backward versions).

Soundscape type, diversity, and the type × diversity interaction all had significant effects (F[3, 276] = 78.6; P < .001; eta-squared = 0.461; F[3, 276] = 3.16; P = .025; eta-squared = 0.033; and F[3, 276] = 2.66; P = .028, respectively), "suggesting that all of these factors, as well as their interaction, had a significant impact on the perception of soundscapes (that is, ratings on monotony/diversity, beauty, and pleasantness)."
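As a reader's sanity check (not an analysis from the paper), the reported eta-squared values can be recovered from the F statistics and their degrees of freedom, assuming the authors report partial eta-squared:

```python
# Sanity check: partial eta-squared recovered from an F statistic and its
# degrees of freedom. Formula: eta_p^2 = (F * df1) / (F * df1 + df2).
# F values and dfs below are those reported in the article.
def partial_eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

print(round(partial_eta_squared(78.6, 3, 276), 3))  # soundscape type -> 0.461
print(round(partial_eta_squared(3.16, 3, 276), 3))  # diversity -> 0.033
```

Both values match the effect sizes quoted above, which supports reading them as partial eta-squared.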

A post hoc examination showed that depressive symptoms significantly increased within the low- and high-diversity urban soundscapes but decreased significantly in the high-diversity birdsong soundscapes (T[1, 60] = –2.57; P = .012; d = –0.29).

For anxiety, the post hoc within-group analyses found no effects within low- and high-diversity traffic noise conditions (T[1, 82] = –1.37; P = .174; d = –0.15 and T[1, 68] = 0.49; P = .629; d = 0.06, respectively). By contrast, there were significant declines in both birdsong conditions (low diversity: T[1, 62] = –6.13; P < .001; d = –0.77; high diversity: T[1, 60] = –6.32; P < .001; d = –0.70).

Similarly, there were no changes in paranoia when participants listened to either low- or high-diversity traffic noise (T[1, 82] = –0.55; P = .583; d = –0.06 and T[1, 68] = 0.67; P = .507; d = 0.08, respectively). On the other hand, both birdsong conditions yielded reductions in paranoia (low diversity: T[1, 62] = –5.90; P < .001; d = –0.74; high diversity: T[1, 60] = –4.11; P < .001; d = –0.46).
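For a within-group pre/post comparison, Cohen's d of the change scores relates to the paired t statistic as d = t/√n, so several of the reported effect sizes can be reproduced from the t values and group sizes listed above (our check, not the authors' code):

```python
import math

# For a paired (pre/post) t-test, Cohen's d of the change scores is
# d = t / sqrt(n). The t statistics and group sizes are from the article.
def d_from_t(t, n):
    return t / math.sqrt(n)

print(round(d_from_t(-1.37, 83), 2))  # anxiety, low-diversity traffic -> -0.15
print(round(d_from_t(-5.90, 63), 2))  # paranoia, low-diversity birdsong -> -0.74
print(round(d_from_t(-4.11, 80), 2))  # paranoia, high-diversity birdsong -> -0.46
```

The recomputed values agree with the reported ones to two decimal places.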

None of the soundscapes had any effect on cognition.

“In theory, birds can be representational for natural and vital environments which, in turn, transfer the positive effects of nature on birdsong listeners,” said Mr. Stobbe.

“Taken together, the findings of the current study provide another facet of why interactions with nature can be beneficial for our mental health, and it is highly important to preserve nature,” he added.

Mr. Stobbe said that future research should investigate mixed soundscapes, including whether the presence of natural sounds in urban settings lowers the impact of stressors such as traffic noise.
 

An understudied area

Commenting for this article, Ken Duckworth, MD, chief medical officer of the National Alliance on Mental Illness, called the study "interesting but limited."

Dr. Duckworth, who was not involved in the research, said that the "benefits of nature are understudied" and agreed with the investigators that it is potentially important to study the use of birdsongs in psychiatric facilities. "Future studies could also correlate the role of birdsong with the mental health benefits/aspects of 'being in nature,' which has been found to have some effect."

Open Access funding was enabled and organized by Projekt DEAL. The authors and Dr. Duckworth declared no competing interests.

A version of this article first appeared on Medscape.com.


Antibiotic may enhance noninvasive brain stimulation for depression

Article Type
Changed
Wed, 10/26/2022 - 15:03

Administering D-cycloserine (DCS) along with transcranial magnetic stimulation (TMS) may be a promising strategy to improve outcomes in major depressive disorder (MDD), new research suggests.

Dr. Alexander McGirr

“The take-home message is that this proof-of-concept study opens up a new avenue of treatment research so that in the future, we may be able to provide our patients with safe and well-tolerated medications and enhance noninvasive brain stimulation treatments for depression,” senior author Alexander McGirr, MD, PhD, assistant professor of psychiatry, University of Calgary (Alta.), told this news organization.

Dr. Scott Aaronson

“Once the safety and efficacy of this strategy have been confirmed with larger multisite studies, this could be deployed within existing health care infrastructure,” he said.

The study was published online in JAMA Psychiatry.

Synaptic plasticity

Repetitive transcranial magnetic stimulation (rTMS) and the more recently developed intermittent theta-burst stimulation (iTBS) are noninvasive brain stimulation modalities that have the largest evidence base in improving MDD. Although efficacious, an “unacceptable proportion of patients do not significantly improve” with these approaches, the authors write.

“We believe that iTBS improves depression through a process called synaptic plasticity, or how neurons adapt to stimulation, but we know that synaptic plasticity is impacted by the illness,” Dr. McGirr explained. This “could be the reason that only some patients benefit.”

One potential strategy to enhance neuroplasticity is to administer an adjunctive N-methyl D-aspartate (NMDA) receptor agonist during stimulation, since the NMDA receptor is a “key regulator of synaptic plasticity,” the authors state. In fact, synaptic plasticity with continuous and intermittent TBS is NMDA-receptor–dependent.

“DCS is an NMDA receptor partial agonist, and so at the low dose we used in our trial (100 mg), it can facilitate NMDA receptor signaling. The hypothesis was that pairing it with iTBS would enhance synaptic plasticity and clinical outcomes,” Dr. McGirr said.

The group’s previous research demonstrated that targeting the NMDA receptor with low-dose DCS “normalizes long-term motor cortex plasticity in individuals with MDD.” It also led to greater persistence of iTBS-induced changes compared to placebo.

However, “a demonstration that these physiological effects have an impact on treatment outcomes is lacking,” the authors note.

To address this gap, the researchers conducted a 4-week double-blind, placebo-controlled trial in which 50 participants (mean [standard deviation] age, 40.8 [13.4] years; 62% women) were randomly assigned on a 1:1 basis to receive either iTBS plus DCS or iTBS plus placebo (n = 25 per group) for the first 2 weeks of the trial, followed by iTBS without an adjunct for the third and fourth weeks.

Participants were required to be experiencing a major depressive episode and to have failed to respond to at least one adequate antidepressant trial or psychotherapy (but not more than four adequate antidepressant trials during the current episode).

Patients with acute suicidality, psychosis, recent substance use disorder, benzodiazepine use, seizures, unstable medical conditions, history of nonresponse to rTMS or electroconvulsive therapy, or comorbid psychiatric conditions, as well as those for whom psychotherapy was initiated within 3 months of enrollment or during the trial, were excluded.

Depression was measured by the Montgomery-Åsberg Depression Rating Scale (MADRS) (changes in score constituted the primary outcome) and the 17-item Hamilton Depression Rating Scale (HDRS-17).

“Secondary outcomes included clinical response, clinical remission, and Clinical Global Impression (CGI) scores,” the authors state.

“Promising” findings

Most participants in the iTBS plus placebo group were White (80%); 12% were Asian, and 8% were classified as “other.” A smaller proportion of participants in the iTBS plus DCS group were White (68%); the next smallest group was Asian (16%), followed by Hispanic (12%), and “other” (4%).

Participants presented with moderate to severe depressive symptoms, as measured by both the HDRS-17 and the MADRS. The placebo and intervention groups had similar scores at baseline. Resting motor threshold did not differ significantly between the groups, either at baseline or between the weeks with and without adjunctive treatment.

Greater improvement in MADRS scores was found in the intervention group than in the placebo group (mean difference, –6.15 [95% confidence interval, –9.88 to –2.43]; Hedges g, 0.99 [0.34-1.62]).
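Hedges g is Cohen's d scaled by a small-sample correction factor, J = 1 − 3/(4(n1 + n2) − 9); with 25 participants per arm that factor is about 0.98, so g sits only slightly below the uncorrected d. A minimal sketch of the correction (our illustration, not the authors' computation):

```python
# Hedges g = Cohen's d times the small-sample correction
# J = 1 - 3 / (4*(n1 + n2) - 9). Group sizes default to the trial's
# 25 participants per arm.
def hedges_g(d, n1=25, n2=25):
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# With n = 25 per arm the correction factor is 1 - 3/191, roughly 0.984.
print(round(hedges_g(1.0), 3))  # -> 0.984
```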

A larger treatment effect was found after 4 weeks of treatment than after 2 weeks, although the adjuvant was present for the first 2 weeks. “We speculate that, despite ongoing iTBS, this reflects an erosion of the placebo effect, as 15 of 25 participants (60%) in the iTBS plus placebo group plateaued or had a worsening MADRS score, compared with 9 of 25 participants (36%) in the iTBS plus DCS group,” the authors write.

The intervention group showed higher rates of clinical response compared to the placebo group (73.9% vs. 29.3%, respectively), as well as higher rates of clinical remission (39.1% vs. 4.2%, respectively), as reflected in lower CGI-severity ratings and greater CGI-improvement ratings.
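Those response rates imply a large absolute difference between arms. As a back-of-envelope illustration (ours, not a statistic the authors report), the number needed to treat is the reciprocal of the absolute risk difference:

```python
# Illustrative number-needed-to-treat (NNT) from the reported response
# rates. NNT = 1 / (absolute risk difference). This is our own
# back-of-envelope figure, not one reported by the study authors.
response_dcs, response_placebo = 0.739, 0.293
nnt = 1 / (response_dcs - response_placebo)
print(round(nnt, 1))  # roughly 2 patients treated per additional responder
```

An NNT this low would be notable if it held up in larger trials, which is precisely what the authors call for.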

There were no serious adverse events during the trial.

The authors note several limitations, including the small sample size and the fact that participants received the adjunctive treatment for only 2 weeks. Longer treatment courses “require dedicated study.” And the short length of the trial (only 4 weeks) meant the difference between “treatment acceleration” and “treatment enhancement” could not be determined.

Nevertheless, the results are “promising” and suggest additional investigation into “intersectional approaches with other dosing regimens and precision medicine targeting approaches,” the authors state.
 

Synergistic approach

Commenting on the study, Scott Aaronson, MD, chief science officer, Institute for Advanced Diagnostics and Therapeutics, Sheppard Pratt, Towson, Md., called the findings “heartening.” He noted that the study “demonstrates a creative approach of combining an FDA-approved antibiotic with NMDA partial agonist activity – D-cycloserine – with a brief course of iTBS with the aim of enhancing the neuronal plasticity iTBS creates.”

Dr. Aaronson, who is also an adjunct professor at the University of Maryland, Baltimore, and was not involved with the study, added, “This is an early demonstration of the ability to further exploit neuronal changes from neurostimulation by synergistic use of a pharmacologic intervention.”

The study was supported in part by a Young Investigator Award from the Brain and Behavior Research Foundation and the Campus Alberta Innovates Program Chair in Neurostimulation. Dr. McGirr has a patent for PCT/CA2022/050839 pending with MCGRx Corp and is a shareholder of MCGRx Corp. The other authors’ disclosures are listed on the original article. Dr. Aaronson is a consultant for Neuronetics.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Administering D-cycloserine (DCS) along with transmagnetic stimulation (TMS) may be a promising strategy to improve outcomes in major depressive disorder (MDD), new research suggests.

Dr. Alexander McGirr

“The take-home message is that this proof-of-concept study opens up a new avenue of treatment research so that in the future, we may be able to provide our patients with safe and well-tolerated medications and enhance noninvasive brain stimulation treatments for depression,” senior author Alexander McGirr, MD, PhD, assistant professor of psychiatry, University of Calgary (Alta.), told this news organization.

Dr. Scott Aaronson

“Once the safety and efficacy of this strategy have been confirmed with larger multisite studies, this could be deployed within existing health care infrastructure,” he said.

The study was published online in JAMA Psychiatry.

Synaptic plasticity

Repetitive transmagnetic stimulation (rTMS) and the more recently developed intermittent theta-burst stimulation (iTBS) are noninvasive brain stimulation modalities that have the largest evidence base in improving MDD. Although efficacious, an “unacceptable proportion of patients do not significantly improve” with these approaches, the authors write.

“We believe that iTBS improves depression through a process called synaptic plasticity, or how neurons adapt to stimulation, but we know that synaptic plasticity is impacted by the illness,” Dr. McGirr explained. This “could be the reason that only some patients benefit.”

One potential strategy to enhance neuroplasticity is to administer an adjunctive N-methyl D-aspartate (NMDA) receptor agonist during stimulation, since the NMDA receptor is a “key regulator of synaptic plasticity,” the authors state. In fact, synaptic plasticity with continuous and intermittent TBS is NMDA-receptor–dependent.

“DCS is an NMDA receptor partial agonist, and so at the low dose we used in our trial (100 mg), it can facilitate NMDA receptor signaling. The hypothesis was that pairing it with iTBS would enhance synaptic plasticity and clinical outcomes,” Dr. McGirr said.

The group’s previous research demonstrated that targeting the NMDA receptor with low-dose DCS “normalizes long-term motor cortex plasticity in individuals with MDD.” It also led to greater persistence of iTBS-induced changes compared to placebo.

However, “a demonstration that these physiological effects have an impact on treatment outcomes is lacking,” the authors note.

To address this gap, the researchers conducted a 4-week double-blind, placebo-controlled trial in which 50 participants (mean [standard deviation] age, 40.8 [13.4] years; 62% women) were randomly assigned on a 1:1 basis to receive either iTBS plus DCS or iTBS plus placebo (n = 25 per group) for the first 2 weeks of the trial, followed by iTBS without an adjunct for the third and fourth weeks.

Participants were required to be experiencing a major depressive episode and to have failed to respond to at least one adequate antidepressant trial or psychotherapy (but not more than four adequate antidepressant trials during the current episode).

Patients with acute suicidality, psychosis, recent substance use disorder, benzodiazepine use, seizures, unstable medical conditions, history of nonresponse to rTMS or electroconvulsive therapy, or comorbid psychiatric conditions, as well as those for whom psychotherapy was initiated within 3 months of enrollment or during the trial, were excluded.

Depression was measured by the Montgomery-Åsberg Depression Rating Scale (MADRS) (changes in score constituted the primary outcome) and the 17-item Hamilton Depression Rating Scale (17-HDRS).

“Secondary outcomes included clinical response, clinical remission, and Clinical Global Impression (CGI) scores,” the authors state.
 

 

 

“Promising” findings

Most participants in the iTBS plus placebo group were White (80%); 12% were Asian, and 8% were classified as “other.” A smaller proportion of participants in the iTBS plus DCS group were White (68%); the next smallest group was Asian (16%), followed by Hispanic (12%), and “other” (4%).

Participants presented with moderate-severe depressive symptoms, as measured by both the HRDS-17 and the MADRS. The placebo and intervention groups had similar scores at baseline. Resting motor threshold did not differ significantly between the groups, either at baseline or between the weeks with and without adjunctive treatment.

Greater improvements in MADRS scores were found in the intervention group than in the placebo groups (mean difference, –6.15 [95% confidence interval, –2.43 to –9.88]; Hedges g, 0.99 [0.34-1.62]).

A larger treatment effect was found after 4 weeks of treatment than after 2 weeks, although the adjuvant was present for the first 2 weeks. “We speculate that, despite ongoing iTBS, this reflects an erosion of the placebo effect, as 15 of 25 participants (60%) in the iTBS plus placebo group plateaued or had a worsening MADRS score, compared with 9 of 25 participants (36%) in the iTBS plus DCS group,” the authors write.

The intervention group showed higher rates of clinical response compared to the placebo group (73.9% vs. 29.3%, respectively), as well as higher rates of clinical remission (39.1% vs. 4.2%, respectively), as reflected in lower CGI-severity ratings and greater CGI-improvement ratings.

There were no serious adverse events during the trial.

The authors note several limitations, including the small sample size and the fact that participants received the adjunctive treatment for only 2 weeks. Longer treatment courses “require dedicated study.” And the short length of the trial (only 4 weeks) meant the difference between “treatment acceleration” and “treatment enhancement” could not be determined.

Nevertheless, the results are “promising” and suggest additional investigation into “intersectional approaches with other dosing regimens and precision medicine targeting approaches,” the authors state.
 

Synergistic approach

Commenting on the study, Scott Aaronson, MD, chief science officer, Institute for Advanced Diagnostics and Therapeutics, Sheppard Pratt, Towson, Md., called the findings “heartening.” He noted that the study “demonstrates a creative approach of combining an FDA-approved antibiotic with NMDA partial agonist activity – D-cycloserine – with a brief course of iTBS with the aim of enhancing the neuronal plasticity iTBS creates.”

Dr. Aaronson, who is also an adjunct professor at the University of Maryland, Baltimore, and was not involved with the study, added, “This is an early demonstration of the ability to further exploit neuronal changes from neurostimulation by synergistic use of a pharmacologic intervention.”

The study was supported in part by a Young Investigator Award from the Brain and Behavior Research Foundation and the Campus Alberta Innovates Program Chair in Neurostimulation. Dr. McGirr has a patent for PCT/CA2022/050839 pending with MCGRx Corp and is a shareholder of MCGRx Corp. The other authors’ disclosures are listed on the original article. Dr. Aaronson is a consultant for Neuronetics.

A version of this article first appeared on Medscape.com.

Administering D-cycloserine (DCS) along with transmagnetic stimulation (TMS) may be a promising strategy to improve outcomes in major depressive disorder (MDD), new research suggests.

Dr. Alexander McGirr

“The take-home message is that this proof-of-concept study opens up a new avenue of treatment research so that in the future, we may be able to provide our patients with safe and well-tolerated medications and enhance noninvasive brain stimulation treatments for depression,” senior author Alexander McGirr, MD, PhD, assistant professor of psychiatry, University of Calgary (Alta.), told this news organization.

Dr. Scott Aaronson

“Once the safety and efficacy of this strategy have been confirmed with larger multisite studies, this could be deployed within existing health care infrastructure,” he said.

The study was published online in JAMA Psychiatry.

Synaptic plasticity

Repetitive transmagnetic stimulation (rTMS) and the more recently developed intermittent theta-burst stimulation (iTBS) are noninvasive brain stimulation modalities that have the largest evidence base in improving MDD. Although efficacious, an “unacceptable proportion of patients do not significantly improve” with these approaches, the authors write.

“We believe that iTBS improves depression through a process called synaptic plasticity, or how neurons adapt to stimulation, but we know that synaptic plasticity is impacted by the illness,” Dr. McGirr explained. This “could be the reason that only some patients benefit.”

One potential strategy to enhance neuroplasticity is to administer an adjunctive N-methyl D-aspartate (NMDA) receptor agonist during stimulation, since the NMDA receptor is a “key regulator of synaptic plasticity,” the authors state. In fact, synaptic plasticity with continuous and intermittent TBS is NMDA-receptor–dependent.

“DCS is an NMDA receptor partial agonist, and so at the low dose we used in our trial (100 mg), it can facilitate NMDA receptor signaling. The hypothesis was that pairing it with iTBS would enhance synaptic plasticity and clinical outcomes,” Dr. McGirr said.

The group’s previous research demonstrated that targeting the NMDA receptor with low-dose DCS “normalizes long-term motor cortex plasticity in individuals with MDD.” It also led to greater persistence of iTBS-induced changes compared to placebo.

However, “a demonstration that these physiological effects have an impact on treatment outcomes is lacking,” the authors note.

To address this gap, the researchers conducted a 4-week double-blind, placebo-controlled trial in which 50 participants (mean [standard deviation] age, 40.8 [13.4] years; 62% women) were randomly assigned on a 1:1 basis to receive either iTBS plus DCS or iTBS plus placebo (n = 25 per group) for the first 2 weeks of the trial, followed by iTBS without an adjunct for the third and fourth weeks.

Participants were required to be experiencing a major depressive episode and to have failed to respond to at least one adequate antidepressant trial or psychotherapy (but not more than four adequate antidepressant trials during the current episode).

Patients with acute suicidality, psychosis, recent substance use disorder, benzodiazepine use, seizures, unstable medical conditions, history of nonresponse to rTMS or electroconvulsive therapy, or comorbid psychiatric conditions, as well as those for whom psychotherapy was initiated within 3 months of enrollment or during the trial, were excluded.

Depression was measured by the Montgomery-Åsberg Depression Rating Scale (MADRS) (changes in score constituted the primary outcome) and the 17-item Hamilton Depression Rating Scale (17-HDRS).

“Secondary outcomes included clinical response, clinical remission, and Clinical Global Impression (CGI) scores,” the authors state.
 

 

 

“Promising” findings

Most participants in the iTBS plus placebo group were White (80%); 12% were Asian, and 8% were classified as “other.” A smaller proportion of participants in the iTBS plus DCS group were White (68%); the next smallest group was Asian (16%), followed by Hispanic (12%), and “other” (4%).

Participants presented with moderate-severe depressive symptoms, as measured by both the HRDS-17 and the MADRS. The placebo and intervention groups had similar scores at baseline. Resting motor threshold did not differ significantly between the groups, either at baseline or between the weeks with and without adjunctive treatment.

Greater improvements in MADRS scores were found in the intervention group than in the placebo groups (mean difference, –6.15 [95% confidence interval, –2.43 to –9.88]; Hedges g, 0.99 [0.34-1.62]).

A larger treatment effect was found after 4 weeks of treatment than after 2 weeks, although the adjuvant was present for the first 2 weeks. “We speculate that, despite ongoing iTBS, this reflects an erosion of the placebo effect, as 15 of 25 participants (60%) in the iTBS plus placebo group plateaued or had a worsening MADRS score, compared with 9 of 25 participants (36%) in the iTBS plus DCS group,” the authors write.

The intervention group showed higher rates of clinical response compared to the placebo group (73.9% vs. 29.3%, respectively), as well as higher rates of clinical remission (39.1% vs. 4.2%, respectively), as reflected in lower CGI-severity ratings and greater CGI-improvement ratings.

There were no serious adverse events during the trial.

The authors note several limitations, including the small sample size and the fact that participants received the adjunctive treatment for only 2 weeks. Longer treatment courses “require dedicated study.” And the short length of the trial (only 4 weeks) meant the difference between “treatment acceleration” and “treatment enhancement” could not be determined.

Nevertheless, the results are “promising” and suggest additional investigation into “intersectional approaches with other dosing regimens and precision medicine targeting approaches,” the authors state.
 

Synergistic approach

Commenting on the study, Scott Aaronson, MD, chief science officer, Institute for Advanced Diagnostics and Therapeutics, Sheppard Pratt, Towson, Md., called the findings “heartening.” He noted that the study “demonstrates a creative approach of combining an FDA-approved antibiotic with NMDA partial agonist activity – D-cycloserine – with a brief course of iTBS with the aim of enhancing the neuronal plasticity iTBS creates.”

Dr. Aaronson, who is also an adjunct professor at the University of Maryland, Baltimore, and was not involved with the study, added, “This is an early demonstration of the ability to further exploit neuronal changes from neurostimulation by synergistic use of a pharmacologic intervention.”

The study was supported in part by a Young Investigator Award from the Brain and Behavior Research Foundation and the Campus Alberta Innovates Program Chair in Neurostimulation. Dr. McGirr has a patent for PCT/CA2022/050839 pending with MCGRx Corp and is a shareholder of MCGRx Corp. The other authors’ disclosures are listed on the original article. Dr. Aaronson is a consultant for Neuronetics.

A version of this article first appeared on Medscape.com.

FROM JAMA PSYCHIATRY

Suicide notes offer ‘unique window’ into motives, risks in the elderly

Article Type
Changed
Fri, 10/14/2022 - 13:45

Suicide notes left by elderly people provide a unique opportunity to better understand and prevent suicide in this often vulnerable population.

A new analysis of notes penned by seniors who died by suicide reveals several common themes. These include feeling as if they were a burden, feelings of guilt, experiencing mental illness, loneliness, or isolation, as well as poor health and/or disability.

“The most important message [in our findings] is that there is hope,” study investigator Ari B. Cuperfain, MD, Temerty Faculty of Medicine, University of Toronto, told this news organization.

“Suicide risk is modifiable, and we encourage that care providers sensitively explore thoughts of suicide in patients expressing depressive thoughts or difficulty coping with other life stressors,” he said.

The study was published online in The American Journal of Geriatric Psychiatry.
 

Opportunity for intervention

Most previous studies of late-life suicide have focused on risk factors rather than the themes and meaning underlying individuals’ distress.

Dr. Cuperfain’s group had previously analyzed suicide notes to “explore the relationship between suicide and an individual’s experience with mental health care in all age groups,” he said. For the current study, the investigators analyzed the subset of notes written exclusively by older adults.

The researchers “hypothesized that suicide notes could provide a unique window into the thought processes of older adults during a critical window for mental health intervention,” he added.

Although effective screening for suicidality in older adults can mitigate suicide risk, the frequency of suicide screening decreases with increasing age, the authors noted.

In addition, suicide attempts are typically more often fatal in older adults than in the general population. Understanding the motivations for suicide in this vulnerable population can inform potential interventions.

The researchers used a constructivist grounded theory framework to analyze suicide notes available from their previous study and notes obtained from the Office of the Coroner in Toronto from adults aged 65 years and older (n = 29; mean [SD], age 76.2 [8.3] years; 79% men).

The investigators began with open coding of the notes, “specifying a short name to a segment of data that summarizes and accounts for each piece of data.” They then used a series of techniques to identify terms and themes (repeated patterns of ideas reflected in the data).

Once themes had been determined, they identified “pathways between these themes and the final act of suicide.”
 

Common themes

Four major themes emerged in the analysis of the suicide notes.

Recurring terms included “pain,” “[poor] sleep,” or “[wakeful] nights,” as well as “sorry” (either about past actions or about the suicide), and “I just can’t” (referring to the inability to carry on).

The suicide notes “provided the older writers’ conceptual schema for suicide, elucidating the cognitive process linking their narratives to the acts of suicide.” Examples included the following:

  • Suicide as a way out or solution to an insoluble problem.
  • Suicide as the final act in a long road of suffering.
  • Suicide as the logical culmination of life (the person “lived a good life”).

“Our study enriches the understanding of ‘why’ rather than just ‘which’ older adults die by suicide,” the authors noted.

“Care providers can help older adults at risk of suicide through a combination of treatment options. These include physician involvement to manage depression, psychosis, or pain, psychotherapy to reframe certain ways of thinking, or social activities to reduce isolation,” Dr. Cuperfain said.

“By understanding the experiences of older adults and what is underlying their suicidal thoughts, these interventions can be tailored specifically for the individual experiencing distress,” he added.
 

Untangling suicide drivers

Commenting on the study, Yeates Conwell, MD, professor and vice chair, department of psychiatry, University of Rochester (N.Y.) Medical Center, said that “by analyzing the suicide notes of older people who died by suicide, the paper lends a unique perspective to our understanding of why they may have taken their lives.”

University of Rochester Medical Center
Dr. Yeates Conwell

Dr. Conwell, director of the geriatric psychiatry program and codirector of the Center for the Study and Prevention of Suicide, University of Rochester, and author of an accompanying editorial, said that “by including the decedent’s own voice, the analysis of notes is a useful complement to other approaches used for the study of suicide in this age group.”

However, “like all other approaches, it is subject to potential biases in interpretation. The meaning in the notes must be derived with careful consideration of context, both what is said and what is not said, and the likelihood that both overt and covert messages are contained in and between their lines,” cautioned Dr. Conwell.

“Acknowledging the strength and limitations of each approach to the study of suicide death, together they can help untangle its drivers and support the search for effective, acceptable, and scalable prevention interventions. No one approach alone, however, will reveal the ‘cause’ of suicide,” Dr. Conwell wrote.

No source of study funding was provided. Dr. Cuperfain reports no relevant financial relationships. The other authors’ disclosures are listed on the original article. Dr. Conwell reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM THE AMERICAN JOURNAL OF GERIATRIC PSYCHIATRY

Gut microbiota disruption a driver of aggression in schizophrenia?

Article Type
Changed
Mon, 10/10/2022 - 15:09

Disturbances in the gut may help explain why some patients with schizophrenia are aggressive whereas others are not, new research suggests. However, at least one expert expressed concerns over the study’s conclusions.

Results from a study of 50 inpatients with schizophrenia showed significantly higher pro-inflammation, pro-oxidation, and leaky gut biomarkers in those with aggression vs. their peers who did not display aggression.

In addition, those with aggression showed lower alpha diversity and evenness of the fecal bacterial community, lower levels of several beneficial gut bacteria, and higher levels of the fecal genus Prevotella.

Levels of six short-chain fatty acids (SCFAs) and six neurotransmitters were also lower in the aggression group than in the no-aggression group.

“The present study was the first to compare the state of inflammation, oxidation, intestinal microbiota, and metabolites” in inpatients with schizophrenia and aggression, compared with those who did not show aggression, write the investigators, led by Hongxin Deng, department of psychiatry, Zhumadian (China) Psychiatric Hospital.

“Results indicate pro-inflammation, pro-oxidation and leaky gut phenotypes relating to enteric dysbacteriosis and microbial SCFAs feature the aggression in [individuals with schizophrenia], which provides clues for future microbial-based or anti-inflammatory/oxidative therapies on aggression,” they add.

The findings were published online in BMC Psychiatry.
 

Unknown pathogenesis

Although emerging evidence suggests that schizophrenia “may augment the propensity for aggression incidence about fourfold to sevenfold,” the pathogenesis of aggression “remains largely unknown,” the investigators note.

The same researchers previously found an association between the systemic pro-inflammation response and the onset or severity of aggression in schizophrenia, “possibly caused by leaky gut-induced bacterial translocation.”

The researchers suggest that peripheral cytokines “could cross the blood-brain barrier, thus precipitating changes in mood and behavior through hypothalamic-pituitary-adrenal axis.”

However, they note that the pro-inflammation phenotype is “often a synergistic effect of multiple causes.” Of these, chronic pro-oxidative stress has been shown to contribute to aggression onset in intermittent explosive disorder, but this association has rarely been confirmed in patients with schizophrenia.

In addition, increasing evidence points to enteric dysbacteriosis and dysbiosis of intestinal flora metabolites, including SCFAs or neurotransmitters, as potentially “integral parts of psychiatric disorders’ pathophysiology” by changing the state of both oxidative stress and inflammation.

The investigators hypothesized that the systemic pro-inflammation phenotype in aggression-affected schizophrenia cases “involves alterations to gut microbiota and its metabolites, leaky gut, and oxidative stress.” However, the profiles of these variables and their interrelationships have been “poorly investigated” in inpatients with schizophrenia and aggression.

To fill this gap, they assessed adult psychiatric inpatients with schizophrenia and aggressive behaviors and inpatients with schizophrenia but no aggressive behavior within 1 week before admission (n = 25 per group; mean age, 33.52 years, and 32.88 years, respectively; 68% and 64% women, respectively).

They collected stool samples from each patient and used enzyme-linked immunoassay (ELISA) to detect fecal calprotectin protein, an indicator of intestinal inflammation. They also collected fasting peripheral blood samples, using ELISA to detect several biomarkers.

The researchers also used the Modified Overt Aggression Scale (MOAS) to characterize aggressive behaviors and the Positive and Negative Syndrome Scale to characterize psychiatric symptoms.
‘Vital role’

Significantly higher biomarkers for systemic pro-inflammation, pro-oxidation, and leaky gut were found in the aggression group than in the no-aggression group (all P < .05).

After controlling for potential confounders, the researchers also found positive associations between MOAS scores and biomarkers, both serum and fecal.

There were also positive associations between serum 8-hydroxy-2′-deoxyguanosine (8-OH-DG) or 8-isoprostane (8-ISO) and systemic inflammatory biomarkers (all R > 0; P < .05).

In addition, the alpha diversity and evenness of the fecal bacterial community were lower in the aggression group than in the no-aggression group.

When the researchers compared the relative abundance of the top 15 genera of intestinal microflora in the two groups, Bacteroides, Faecalibacterium, Blautia, Bifidobacterium, Collinsella, and Eubacterium coprostanoligenes were “remarkably reduced” in the group with aggression, whereas the abundance of the fecal genus Prevotella was significantly increased (all corrected P < .001).

In the patients who had schizophrenia with aggression, levels of six SCFAs and six neurotransmitters were much lower than in the patients with schizophrenia but no aggression (all P < .05).

Inpatients with schizophrenia and aggression “had dramatically increased serum level of 8-OH-DG (nucleic acid oxidation biomarker) and 8-ISO (lipid oxidation biomarker) than those without, and further correlation analysis also showed positive correlativity between pro-oxidation and systemic pro-inflammation response or aggression severity,” the investigators write.

The findings “collectively suggest the cocontributory role of systemic pro-inflammation and pro-oxidation in the development of aggression” in schizophrenia, they add. “Gut dysbacteriosis with leaky gut seems to play a vital role in the pathophysiology.”
 

Correlation vs. causality

Commenting for this article, Emeran Mayer, MD, distinguished research professor of medicine at the G. Oppenheimer Center for Neurobiology of Stress and Resilience and UCLA Brain Gut Microbiome Center, Los Angeles, said that “at first glance, it is interesting that the behavioral trait of aggression but not the diagnosis of schizophrenia showed the differences in markers of systemic inflammation, increased gut permeability, and microbiome parameters.”

However, like many such descriptive studies, the research is flawed by comparing two patient groups and concluding causality between the biomarkers and the behavior traits, added Dr. Mayer, who was not involved with the study.

The study’s shortcomings include its small sample size as well as several confounding factors – particularly diet, sleep, exercise, and stress and anxiety levels – that were not considered, he said. The study also lacked a control group with high levels of aggression but without schizophrenia.

“The observed changes in intestinal permeability, unscientifically referred to as ‘leaky gut,’ as well as the gut microbiome differences, could be secondary to chronically increased sympathetic nervous system activation in the high aggression group,” Dr. Mayer said. “This is an interesting hypothesis which should be discussed and should have been addressed in this study.”

The differences in gut microbial composition and SCFA production “could be secondary to differences in plant-based diet components,” Dr. Mayer speculated, wondering how well dietary intake was controlled.

“Overall, it is an interesting descriptive study, which unfortunately does not contribute significantly to a better understanding of the role of the brain-gut microbiome system in schizophrenic patients,” he said.

The study was funded by a grant from China Postdoctoral Science Foundation. The investigators have reported no relevant financial relationships. Dr. Mayer is a scientific advisory board member of Danone, Axial Therapeutics, Viome, Amare, Mahana Therapeutics, Pendulum, Bloom Biosciences, and APC Microbiome Ireland.

A version of this article first appeared on Medscape.com.


The differences in gut microbial composition and SCFA production “could be secondary to differences in plant-based diet components,” Dr. Mayer speculated, wondering how well dietary intake was controlled.

“Overall, it is an interesting descriptive study, which unfortunately does not contribute significantly to a better understanding of the role of the brain-gut microbiome system in schizophrenic patients,” he said.

The study was funded by a grant from the China Postdoctoral Science Foundation. The investigators have reported no relevant financial relationships. Dr. Mayer is a scientific advisory board member of Danone, Axial Therapeutics, Viome, Amare, Mahana Therapeutics, Pendulum, Bloom Biosciences, and APC Microbiome Ireland.

A version of this article first appeared on Medscape.com.


FROM BMC PSYCHIATRY


Long-term antidepressant use tied to an increase in CVD, mortality risk

Article Type
Changed
Thu, 12/22/2022 - 14:01

 

Long-term antidepressant use is tied to an increased risk of adverse outcomes, including cardiovascular disease (CVD), cerebrovascular disease, coronary heart disease (CHD), and all-cause mortality, new research suggests.

The investigators drew on 10-year data from the UK Biobank on over 220,000 adults and compared the risk of developing adverse health outcomes among those taking antidepressants with the risk among those who were not taking antidepressants.

After adjusting for preexisting risk factors, they found that 10-year antidepressant use was associated with a twofold higher risk of CHD, an almost-twofold higher risk of CVD as well as CVD mortality, a higher risk of cerebrovascular disease, and more than double the risk of all-cause mortality.

On the other hand, at 10 years, antidepressant use was associated with a 23% lower risk of developing hypertension and a 32% lower risk of diabetes.

The main culprits were mirtazapine, venlafaxine, duloxetine, and trazodone, although SSRIs were also tied to increased risk.

“Our message for clinicians is that prescribing of antidepressants in the long term may not be harm free [and] we hope that this study will help doctors and patients have more informed conversations when they weigh up the potential risks and benefits of treatments for depression,” study investigator Narinder Bansal, MD, honorary research fellow, Centre for Academic Mental Health and Centre for Academic Primary Care, University of Bristol (England), said in a news release.

“Regardless of whether the drugs are the underlying cause of these problems, our findings emphasize the importance of proactive cardiovascular monitoring and prevention in patients who have depression and are on antidepressants, given that both have been associated with higher risks,” she added.

The study was published online in the British Journal of Psychiatry Open.
 

Monitoring of CVD risk ‘critical’

Antidepressants are among the most widely prescribed drugs; 70 million prescriptions were dispensed in 2018 alone, representing a doubling of prescriptions for these agents in a decade, the investigators noted. “This striking rise in prescribing is attributed to long-term treatment rather than an increased incidence of depression.”

Most trials that have assessed antidepressant efficacy have been “poorly suited to examining adverse outcomes.” One reason for this is that many of the trials are short-term studies. Since depression is “strongly associated” with CVD risk factors, “careful assessment of the long-term cardiometabolic effects of antidepressant treatment is critical.”

Moreover, information about “a wide range of prospectively measured confounders ... is needed to provide robust estimates of the risks associated with long-term antidepressant use,” the authors noted.

The researchers examined the association between antidepressant use and four cardiometabolic morbidity outcomes – diabetes, hypertension, cerebrovascular disease, and CHD. In addition, they assessed two mortality outcomes – CVD mortality and all-cause mortality. Participants were divided into cohorts on the basis of outcome of interest.

The dataset contains detailed information on socioeconomic status, demographics, anthropometric, behavioral, and biochemical risk factors, disability, and health status and is linked to datasets of primary care records and deaths.

The study included 222,121 participants whose data had been linked to primary care records during 2018 (median age of participants, 56-57 years). About half were women, and 96% were of White ethnicity.

Participants were excluded if they had been prescribed antidepressants 12 months or less before baseline, if they had previously been diagnosed for the outcome of interest, if they had been previously prescribed psychotropic drugs, if they used cardiometabolic drugs at baseline, or if they had undergone treatment with antidepressant polytherapy.
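The exclusion rules read like a per-participant eligibility predicate; a sketch of that filtering step follows (the field names are hypothetical placeholders, not the UK Biobank's actual schema):

```python
# Illustrative cohort-exclusion filter; every field name here is an
# assumption made for demonstration, not the study's real data model.
def eligible(p, outcome):
    return not (
        p["antidep_within_12mo_before_baseline"]
        or outcome in p["prior_diagnoses"]
        or p["prior_psychotropics"]
        or p["baseline_cardiometabolic_drugs"]
        or p["antidepressant_polytherapy"]
    )

participants = [
    {"id": 1, "antidep_within_12mo_before_baseline": False,
     "prior_diagnoses": set(), "prior_psychotropics": False,
     "baseline_cardiometabolic_drugs": False,
     "antidepressant_polytherapy": False},
    {"id": 2, "antidep_within_12mo_before_baseline": True,
     "prior_diagnoses": {"diabetes"}, "prior_psychotropics": False,
     "baseline_cardiometabolic_drugs": False,
     "antidepressant_polytherapy": False},
]
cohort = [p["id"] for p in participants if eligible(p, "diabetes")]
print(cohort)  # participant 2 fails two exclusion rules -> [1]
```

Each outcome of interest gets its own cohort this way, since a participant excluded for a prior diabetes diagnosis may still be eligible for, say, the hypertension cohort.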

Potential confounders included age, gender, body mass index, waist/hip ratio, smoking and alcohol intake status, physical activity, parental history of outcome, biochemical and hematologic biomarkers, socioeconomic status, and long-term illness, disability, or infirmity.
 

Mechanism unclear

By the end of the 5- and 10-year follow-up periods, an average of 8% and 6% of participants in each cohort, respectively, had been prescribed an antidepressant. SSRIs constituted the most commonly prescribed class (80%-82%), and citalopram was the most commonly prescribed SSRI (46%-47%). Mirtazapine was the most frequently prescribed non-SSRI antidepressant (44%-46%).

At 5 years, any antidepressant use was associated with an increased risk for diabetes, CHD, and all-cause mortality, but the findings were attenuated after further adjustment for confounders. In fact, SSRIs were associated with a reduced risk of diabetes at 5 years (hazard ratio, 0.64; 95% confidence interval, 0.49-0.83).

At 10 years, SSRIs were associated with an increased risk of cerebrovascular disease, CVD mortality, and all-cause mortality; non-SSRIs were associated with an increased risk of CHD, CVD, and all-cause mortality.

On the other hand, SSRIs were associated with a decrease in risk of diabetes and hypertension at 10 years (HR, 0.68; 95% CI, 0.53-0.87; and HR, 0.77; 95% CI, 0.66-0.89, respectively).
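A 95% CI reported around a hazard ratio implies a standard error on the log scale, from which the z statistic and an approximate two-sided p-value can be back-calculated. This is a standard sanity check on reported intervals, not a reanalysis of the study:

```python
import math

def hr_ci_to_p(hr, lo, hi):
    """Approximate z and two-sided p from a hazard ratio and its 95% CI,
    assuming normality on the log scale (standard back-calculation)."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(hr) / se
    # two-sided p from the standard normal survival function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# The 10-year SSRI estimates reported above:
for name, est in [("diabetes", (0.68, 0.53, 0.87)),
                  ("hypertension", (0.77, 0.66, 0.89))]:
    z, p = hr_ci_to_p(*est)
    print(name, round(z, 2), round(p, 4))
```

Both reported CIs exclude 1, so the back-calculated p-values confirm that both protective associations are significant at the 5% level.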

“While we have taken into account a wide range of pre-existing risk factors for cardiovascular disease, including those that are linked to depression such as excess weight, smoking, and low physical activity, it is difficult to fully control for the effects of depression in this kind of study, partly because there is considerable variability in the recording of depression severity in primary care,” said Dr. Bansal.

“This is important because many people taking antidepressants such as mirtazapine, venlafaxine, duloxetine and trazodone may have a more severe depression. This makes it difficult to fully separate the effects of the depression from the effects of medication,” she said.

Further research “is needed to assess whether the associations we have seen are genuinely due to the drugs; and, if so, why this might be,” she added.
 

Strengths, limitations

Commenting on the study, Roger McIntyre, MD, professor of psychiatry and pharmacology and head of the mood disorders psychopharmacology unit at the University of Toronto, discussed its strengths and weaknesses.

Dr. Roger S. McIntyre

The UK Biobank is a “well-described, well-phenotyped dataset of good quality,” said Dr. McIntyre, chairperson and executive director of the Brain and Cognition Discovery Foundation, Toronto, who was not involved with the study. Another strength is the “impressive number of variables the database contains, which enabled the authors to go much deeper into the topics.”

A “significant limitation” is the confounding that is inherent to the disorder itself – “people with depression have a much higher intrinsic risk of CVD, [cerebrovascular disease], and cardiovascular mortality,” Dr. McIntyre noted.

The researchers did not adjust for trauma or childhood maltreatment, “which are the biggest risk factors for both depression and CVD; and drug and alcohol misuse were also not accounted for.”

Additionally, “to determine whether something is an association or potentially causative, it must satisfy the Bradford-Hill criteria,” said Dr. McIntyre. “Since we’re moving more toward using these big databases and because we depend on them to give us long-term perspectives, we would want to see coherent, compelling Bradford-Hill criteria regarding causation. If you don’t have any, that’s fine too, but then it’s important to make clear that there is no clear causative line, just an association.”

The research was funded by the National Institute for Health Research School for Primary Care Research and was supported by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. Dr. McIntyre has received research grant support from CIHR/GACD/National Natural Science Foundation of China and the Milken Institute and speaker/consultation fees from numerous companies. Dr. McIntyre is CEO of Braxia Scientific.

A version of this article first appeared on Medscape.com.


 

Long-term antidepressant use is tied to an increased risk of adverse outcomes, including cardiovascular disease (CVD), cerebrovascular disease, coronary heart disease (CHD), and all-cause mortality, new research suggests.

The investigators drew on 10-year data from the UK Biobank on over 220,000 adults and compared the risk of developing adverse health outcomes among those taking antidepressants with the risk among those who were not taking antidepressants.

After adjusting for preexisting risk factors, they found that 10-year antidepressant use was associated with a twofold higher risk of CHD, an almost-twofold higher risk of CVD as well as CVD mortality, a higher risk of cerebrovascular disease, and more than double the risk of all-cause mortality.

On the other hand, at 10 years, antidepressant use was associated with a 23% lower risk of developing hypertension and a 32% lower risk of diabetes.

The main culprits were mirtazapine, venlafaxine, duloxetine, and trazodone, although SSRIs were also tied to increased risk.

“Our message for clinicians is that prescribing of antidepressants in the long term may not be harm free [and] we hope that this study will help doctors and patients have more informed conversations when they weigh up the potential risks and benefits of treatments for depression,” study investigator Narinder Bansal, MD, honorary research fellow, Centre for Academic Health and Centre for Academic Primary Care, University of Bristol (England), said in a news release.

“Regardless of whether the drugs are the underlying cause of these problems, our findings emphasize the importance of proactive cardiovascular monitoring and prevention in patients who have depression and are on antidepressants, given that both have been associated with higher risks,” she added.

The study was published online in the British Journal of Psychiatry Open.
 

Monitoring of CVD risk ‘critical’

Antidepressants are among the most widely prescribed drugs; 70 million prescriptions were dispensed in 2018 alone, representing a doubling of prescriptions for these agents in a decade, the investigators noted. “This striking rise in prescribing is attributed to long-term treatment rather than an increased incidence of depression.”

Most trials that have assessed antidepressant efficacy have been “poorly suited to examining adverse outcomes.” One reason for this is that many of the trials are short-term studies. Since depression is “strongly associated” with CVD risk factors, “careful assessment of the long-term cardiometabolic effects of antidepressant treatment is critical.”

Moreover, information about “a wide range of prospectively measured confounders ... is needed to provide robust estimates of the risks associated with long-term antidepressant use,” the authors noted.

The researchers examined the association between antidepressant use and four cardiometabolic morbidity outcomes – diabetes, hypertension, cerebrovascular disease, and CHD. In addition, they assessed two mortality outcomes – CVD mortality and all-cause mortality. Participants were divided into cohorts on the basis of outcome of interest.

The dataset contains detailed information on socioeconomic status, demographics, anthropometric, behavioral, and biochemical risk factors, disability, and health status and is linked to datasets of primary care records and deaths.

The study included 222,121 participants whose data had been linked to primary care records during 2018 (median age of participants, 56-57 years). About half were women, and 96% were of White ethnicity.

Participants were excluded if they had been prescribed antidepressants 12 months or less before baseline, if they had previously been diagnosed for the outcome of interest, if they had been previously prescribed psychotropic drugs, if they used cardiometabolic drugs at baseline, or if they had undergone treatment with antidepressant polytherapy.

Potential confounders included age, gender, body mass index, waist/hip ratio, smoking and alcohol intake status, physical activity, parental history of outcome, biochemical and hematologic biomarkers, socioeconomic status, and long-term illness, disability, or infirmity.
 

Mechanism unclear

By the end of the 5- and 10-year follow-up periods, an average of 8% and 6% of participants in each cohort, respectively, had been prescribed an antidepressant. SSRIs constituted the most commonly prescribed class (80%-82%), and citalopram was the most commonly prescribed SSRI (46%-47%). Mirtazapine was the most frequently prescribed non-SSRI antidepressant (44%-46%).

At 5 years, any antidepressant use was associated with an increased risk for diabetes, CHD, and all-cause mortality, but the findings were attenuated after further adjustment for confounders. In fact, SSRIs were associated with a reduced risk of diabetes at 5 years (hazard ratio, 0.64; 95% confidence interval, 0.49-0.83).

At 10 years, SSRIs were associated with an increased risk of cerebrovascular disease, CVD mortality, and all-cause mortality; non-SSRIs were associated with an increased risk of CHD, CVD, and all-cause mortality.

On the other hand, SSRIs were associated with a decrease in risk of diabetes and hypertension at 10 years (HR, 0.68; 95% CI, 0.53-0.87; and HR, 0.77; 95% CI, 0.66-0.89, respectively).
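For readers less used to parsing these figures, a purely illustrative sketch: a hazard ratio is conventionally read as statistically significant at the 5% level when its 95% confidence interval excludes 1. The numbers below are the SSRI estimates quoted above; the labels are ours.

```python
# Illustrative only: check which of the reported hazard ratios are
# statistically significant at the 5% level (95% CI excludes 1).
# Values are the SSRI estimates quoted in the text above.
results = {
    "SSRI, diabetes, 5 y":      (0.64, 0.49, 0.83),
    "SSRI, diabetes, 10 y":     (0.68, 0.53, 0.87),
    "SSRI, hypertension, 10 y": (0.77, 0.66, 0.89),
}

def ci_excludes_one(hr, lo, hi):
    """A 95% CI that lies entirely below or above 1 implies p < .05."""
    return hi < 1.0 or lo > 1.0

for label, (hr, lo, hi) in results.items():
    direction = "reduced" if hr < 1 else "increased"
    verdict = "significant" if ci_excludes_one(hr, lo, hi) else "not significant"
    print(f"{label}: HR {hr} (95% CI, {lo}-{hi}) -> {direction} risk, {verdict}")
```

All three intervals quoted above lie below 1, which is why the authors describe them as risk reductions.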

“While we have taken into account a wide range of pre-existing risk factors for cardiovascular disease, including those that are linked to depression such as excess weight, smoking, and low physical activity, it is difficult to fully control for the effects of depression in this kind of study, partly because there is considerable variability in the recording of depression severity in primary care,” said Dr. Bansal.

“This is important because many people taking antidepressants such as mirtazapine, venlafaxine, duloxetine and trazodone may have a more severe depression. This makes it difficult to fully separate the effects of the depression from the effects of medication,” she said.

Further research “is needed to assess whether the associations we have seen are genuinely due to the drugs; and, if so, why this might be,” she added.
 

Strengths, limitations

Commenting on the study, Roger McIntyre, MD, professor of psychiatry and pharmacology and head of the mood disorders psychopharmacology unit at the University of Toronto, discussed the strengths and weaknesses of the study.

Dr. Roger S. McIntyre

The UK Biobank is a “well-described, well-phenotyped dataset of good quality,” said Dr. McIntyre, chairperson and executive director of the Brain and Cognition Discovery Foundation, Toronto, who was not involved with the study. Another strength is the “impressive number of variables the database contains, which enabled the authors to go much deeper into the topics.”

A “significant limitation” is the confounding that is inherent to the disorder itself – “people with depression have a much higher intrinsic risk of CVD, [cerebrovascular disease], and cardiovascular mortality,” Dr. McIntyre noted.

The researchers did not adjust for trauma or childhood maltreatment, “which are the biggest risk factors for both depression and CVD; and drug and alcohol misuse were also not accounted for.”

Additionally, “to determine whether something is an association or potentially causative, it must satisfy the Bradford Hill criteria,” said Dr. McIntyre. “Since we’re moving more toward using these big databases and because we depend on them to give us long-term perspectives, we would want to see coherent, compelling Bradford Hill criteria regarding causation. If you don’t have any, that’s fine too, but then it’s important to make clear that there is no clear causative line, just an association.”

The research was funded by the National Institute for Health Research (NIHR) School for Primary Care Research and was supported by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. Dr. McIntyre has received research grant support from CI/GACD/National Natural Science Foundation of China and the Milken Institute and speaker/consultation fees from numerous companies. Dr. McIntyre is CEO of Braxia Scientific.

A version of this article first appeared on Medscape.com.

Article Source

FROM THE BRITISH JOURNAL OF PSYCHIATRY OPEN


Bariatric surgery may up risk for epilepsy

Article Type
Changed
Thu, 12/15/2022 - 15:36

Bariatric surgery may raise the risk of developing epilepsy, new research suggests. Analyzing health records, investigators compared almost 17,000 patients who had undergone bariatric surgery with more than 620,000 individuals with obesity who had not undergone the surgery.

During a minimum 3-year follow-up period, the surgery group had a 45% higher risk of developing epilepsy than the nonsurgery group. Moreover, patients who had a stroke after their bariatric surgery were 14 times more likely to develop epilepsy than those who did not have a stroke.

“When considering having bariatric surgery, people should talk to their doctors about the benefits and risks,” senior investigator Jorge Burneo, MD, professor of neurology, biostatistics, and epidemiology and endowed chair in epilepsy at Western University, London, Ont., told this news organization.

“While there are many health benefits of weight loss, our findings suggest that epilepsy is a long-term risk of bariatric surgery for weight loss,” Dr. Burneo said.

The findings were published online in Neurology.
 

Unrecognized risk factor?

Bariatric surgery has become more common as global rates of obesity have increased. The surgery has been shown to reduce the risk for serious obesity-related conditions, the researchers note.

However, “in addition to the positive outcomes of bariatric surgery, several long-term neurological complications have also been identified,” they write.

One previous study reported increased epilepsy risk following gastric bypass. Those findings “suggest that bariatric surgery may be an unrecognized epilepsy risk factor; however, this possible association has not been thoroughly explored,” write the investigators.

Dr. Burneo said he conducted the study because he has seen patients with epilepsy in his clinic who were “without risk factors, with normal MRIs, who shared the history of having bariatric surgery before the development of epilepsy.”

The researchers’ primary objective was to “assess whether epilepsy risk is elevated following bariatric surgery for weight loss relative to a nonsurgical cohort of patients who are obese,” he noted.

The study used linked administrative health databases in Ontario, Canada. Patients were accrued from July 1, 2010, to Dec. 31, 2016, and were followed until Dec. 31, 2019. The analysis included 639,472 participants, 2.7% of whom had undergone bariatric surgery.

The “exposed” cohort consisted of all Ontario residents aged 18 years or older who had undergone bariatric surgery during the 6-year period (n = 16,958; 65.1% women; mean age, 47.4 years), while the “unexposed” cohort consisted of patients hospitalized with a diagnosis of obesity who had not undergone bariatric surgery (n = 622,514; 62.8% women; mean age, 47.6 years).

Patients with a history of seizures, epilepsy, epilepsy risk factors, prior brain surgery, psychiatric disorders, or drug or alcohol abuse/dependence were excluded from the analysis.

The researchers collected data on patients’ sociodemographic characteristics at the index date, as well as Charlson Comorbidity Index scores during the 2 years prior to index, and data regarding several specific comorbidities, such as diabetes mellitus, hypertension, sleep apnea, depression/anxiety, and cardiovascular factors.

The exposed and unexposed cohorts were followed for a median of 5.8 and 5.9 years, respectively.
 

‘Unclear’ mechanisms

Before weighting, 0.4% of participants in the exposed cohort (n = 73) developed epilepsy, versus 0.2% of participants in the unexposed cohort (n = 1,260) by the end of the follow-up period.

In the weighted cohorts, there were 50.1 epilepsy diagnoses per 100,000 person-years, versus 34.1 per 100,000 person-years (rate difference, 16 per 100,000 person-years).
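The weighted comparison reduces to simple rate arithmetic; a quick sketch using the figures quoted above:

```python
# Sketch of the rate arithmetic behind the weighted comparison above.
# Rates are per 100,000 person-years, as quoted in the text.
exposed_rate = 50.1    # epilepsy diagnoses, bariatric surgery cohort
unexposed_rate = 34.1  # epilepsy diagnoses, nonsurgical obesity cohort

rate_difference = exposed_rate - unexposed_rate  # absolute excess
rate_ratio = exposed_rate / unexposed_rate       # crude relative excess

print(f"Rate difference: {rate_difference:.0f} per 100,000 person-years")
print(f"Crude rate ratio: {rate_ratio:.2f}")
```

The crude rate ratio (about 1.47) is close to the adjusted hazard ratio of 1.45 that the multivariable analysis reported.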

The multivariable analysis of the weighted cohort showed the hazard ratio for epilepsy cases that were associated with bariatric surgery was 1.45 (95% confidence interval, 1.35-1.56), after adjusting for sleep apnea and including stroke as a time-varying covariate.

Having a stroke during the follow-up period increased epilepsy risk 14-fold in the exposed cohort (HR, 14.03; 95% CI, 4.25-46.25).

The investigators note that they were unable to measure obesity status or body mass index throughout the study and that some obesity-related comorbidities “may affect epilepsy risk.”

In addition, Dr. Burneo reported that the study did not investigate potential causes and mechanisms of the association between bariatric surgery and epilepsy risk.

Hypotheses “include potential nutritional deficiencies, receipt of general anesthesia, or other unclear causes,” he said.

“Future research should investigate epilepsy as a potential long-term complication of bariatric surgery, exploring the possible effects of this procedure,” Dr. Burneo added.
 

Risk-benefit discussion

In a comment, Jacqueline French, MD, professor of neurology at NYU Grossman School of Medicine, and director of NYU’s Epilepsy Study Consortium, said she was “not 100% surprised by the findings” because she has seen in her clinical practice “a number of patients who developed epilepsy after bariatric surgery or had a history of bariatric surgery at the time they developed epilepsy.”

On the other hand, she has also seen patients who did not have a history of bariatric surgery and who developed epilepsy.

“I’m unable to tell if there is an association, although I’ve had it at the back of my head as a thought and wondered about it,” said Dr. French, who is also the chief medical and innovation officer at the Epilepsy Foundation. She was not involved with the study.

She noted that possible mechanisms underlying the association are that gastric bypass surgery leads to a “significant alteration” in nutrient absorption. Moreover, “we now know that the microbiome is associated with epilepsy” and that changes occur in the gut microbiome after bariatric surgery, Dr. French said.

There are two take-home messages for practicing clinicians, she added.

“Although the risk [of developing epilepsy] is very low, it should be presented as part of the risks and benefits to patients considering bariatric surgery,” she said.

“It’s equally important to follow up on the potential differences in these patients who go on to develop epilepsy following bariatric surgery,” said Dr. French. “Is there a certain metabolic profile or some nutrient previously absorbed that now is not absorbed that might predispose people to risk?”

This would be “enormously important to know because it might not just pertain to these people but to a whole other cohort of people who develop epilepsy,” Dr. French concluded.

The study was funded by the Ontario Ministry of Health and Ministry of Long-Term Care and by the Jack Cowin Endowed Chair in Epilepsy Research at Western University. Dr. Burneo holds the Jack Cowin Endowed Chair in Epilepsy Research at Western University. The other investigators and Dr. French have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(11)


Article Source

FROM NEUROLOGY


Heart failure drug a new treatment option for alcoholism? 

Article Type
Changed
Fri, 09/30/2022 - 08:05

Spironolactone, a potassium-sparing diuretic typically used to treat heart failure and hypertension, shows promise in treating alcohol use disorder (AUD), new research suggests.

Researchers at the National Institute on Drug Abuse, the National Institute on Alcohol Abuse and Alcoholism, and Yale University, New Haven, Conn., investigated the impact of spironolactone on AUD.

Initially, they studied rodents and found that spironolactone reduced binge drinking in mice and reduced self-administration of alcohol in rats without adversely affecting food or water intake or causing motor or coordination problems.

They also analyzed electronic health records of patients in the United States Veterans Affairs health care system to explore changes in alcohol use after spironolactone was initiated for other conditions. They found a significant link between spironolactone treatment and a reduction in self-reported alcohol consumption, with the largest effects among those who reported hazardous/heavy episodic alcohol use before starting spironolactone.

“Combining findings across three species and different types of research studies, and then seeing similarities in these data, gives us confidence that we are onto something potentially important scientifically and clinically,” senior coauthor Lorenzo Leggio, MD, PhD, senior investigator in the Clinical Psychoneuroendocrinology and Neuropsychopharmacology Section, a joint NIDA and NIAAA laboratory, said in a news release.

The study was published online in Molecular Psychiatry.
 

There is a “critical need to increase the armamentarium of pharmacotherapies to treat individuals with AUD,” the authors note, adding that neuroendocrine systems involved in alcohol craving and drinking “offer promising pharmacologic targets in this regard.”

“Both our team and others have observed that patients with AUD often present with changes in peripheral hormones, including aldosterone, which plays a key role in regulating blood pressure and electrolytes,” Dr. Leggio said in an interview.

Spironolactone is a nonselective mineralocorticoid receptor (MR) antagonist. In animal models, the investigators reported “an inverse correlation between alcohol drinking and the expression of the MR in the amygdala, a key brain region in the development and maintenance of AUD and addiction in general.”

Taken together, this led them to hypothesize that blocking the MR, which is the mechanism of action of spironolactone, “could be a novel pharmacotherapeutic approach for AUD,” he said.

Previous research by the same group of researchers suggested spironolactone “may be a potential new medication to treat patients with AUD.” The present study expanded on those findings and consisted of a three-part investigation.

In the current study, the investigators tested different dosages of spironolactone on binge-like alcohol consumption in male and female mice and assessed food and water intake, blood alcohol levels, motor coordination, and spontaneous locomotion.

They then tested the effects of different dosages of spironolactone injections on operant alcohol self-administration in alcohol-dependent and nondependent male and female rats, also testing blood alcohol levels and motor coordination.

Finally, they analyzed health records of veterans to examine the association between at least 60 continuous days of spironolactone treatment and self-reported alcohol consumption (measured by the Alcohol Use Disorders Identification Test-Consumption [AUDIT-C]).
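The AUDIT-C is a three-item screen, each item scored 0 to 4, for a total of 0 to 12; higher scores indicate heavier drinking. A minimal scoring sketch follows; the cutoffs shown (≥4 for men, ≥3 for women) are the commonly cited screening thresholds, not necessarily the exact ones used in this study.

```python
# Hedged sketch of AUDIT-C scoring, the outcome measure described above.
# Three items, each scored 0-4, summed to a 0-12 total. The screening
# cutoffs below are the commonly cited ones (an assumption, not taken
# from this study).
def audit_c_total(q1, q2, q3):
    """Sum of the three AUDIT-C item scores (each 0-4)."""
    for q in (q1, q2, q3):
        if not 0 <= q <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    return q1 + q2 + q3

def positive_screen(total, sex):
    """Commonly cited cutoffs: >=4 for men, >=3 for women."""
    threshold = 4 if sex == "male" else 3
    return total >= threshold

print(audit_c_total(3, 2, 1))        # total score for one respondent
print(positive_screen(6, "female"))  # whether that total screens positive
```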

Each of the spironolactone-exposed patients was matched using propensity scores with up to five unexposed patients who had reported alcohol consumption in the 2 years prior to the index date.

The final analysis included a matched cohort of 10,726 spironolactone-exposed individuals who were matched to 34,461 unexposed individuals.
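As an illustration of the up-to-1:5 matched design described above, here is a minimal greedy nearest-neighbor matching sketch on a propensity score. This is not the authors’ algorithm (their exact matching procedure is not described here), and the caliper value is an arbitrary placeholder; real analyses typically use dedicated matching packages.

```python
# Minimal sketch of 1:k greedy nearest-neighbor matching on a propensity
# score, illustrating the design described above (up to five unexposed
# participants per exposed participant). NOT the authors' exact algorithm;
# the caliper here is an arbitrary placeholder.
def greedy_match(exposed, unexposed, k=5, caliper=0.05):
    """exposed/unexposed: dicts of id -> propensity score.
    Returns {exposed_id: [matched unexposed ids]}, matching without
    replacement (each control is used at most once)."""
    pool = dict(unexposed)  # copy; matched controls are removed
    matches = {}
    for eid, ps in sorted(exposed.items(), key=lambda kv: kv[1]):
        picked = []
        for _ in range(k):
            if not pool:
                break
            # nearest remaining control by propensity-score distance
            cid = min(pool, key=lambda c: abs(pool[c] - ps))
            if abs(pool[cid] - ps) > caliper:
                break  # nothing close enough to match
            picked.append(cid)
            del pool[cid]
        matches[eid] = picked
    return matches

# toy example: two exposed patients, five candidate controls
exposed = {"e1": 0.30, "e2": 0.70}
unexposed = {"c1": 0.29, "c2": 0.31, "c3": 0.69, "c4": 0.71, "c5": 0.50}
print(greedy_match(exposed, unexposed, k=2))
```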
 

New targets

Spironolactone reduced alcohol intake in mice drinking a sweetened alcohol solution; a 2-way ANOVA revealed a main effect of dose (F(4, 52) = 9.09; P < .0001) and of sex, with female mice drinking more alcohol than male mice (F(1, 13) = 6.05; P = .02).

Post hoc comparisons showed that spironolactone at doses of 50, 100, and 200 mg/kg significantly reduced alcohol intake (P values = .007, .002, and .0001, respectively).

In mice drinking an unsweetened alcohol solution, the 2-way repeated-measures ANOVA similarly found a main effect of dose (F(4, 52) = 5.77; P = .0006) but not of sex (F(1, 13) = 1.41; P = .25).

Spironolactone had no effect on the mice’s intake of a sweet solution without alcohol and had no impact on the consumption of food and water or on locomotion and coordination.

In rats, a 2-way ANOVA revealed a significant main effect of spironolactone dose (F(3, 66) = 43.95; P < .001), with post hoc tests indicating that spironolactone at 25, 50, and 75 mg/kg reduced alcohol self-administration in both alcohol-dependent and nondependent rats (all P values = .0001).
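As a generic illustration of where statistics such as F(3, 66) come from, the sketch below computes a one-way ANOVA F statistic by hand on toy data (four dose groups). The authors’ actual analyses were two-way (and repeated-measures) designs; the data here are invented purely to show the construction of the statistic.

```python
# Generic sketch: a one-way ANOVA F statistic computed by hand on toy
# data. Illustrates the construction of statistics like F(3, 66); the
# data are invented, and the study's own analyses were two-way designs.
def one_way_anova_F(groups):
    """groups: list of lists of observations.
    Returns (F, df_between, df_within)."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    # between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: spread of observations around their group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# toy intake data: vehicle plus three hypothetical dose groups
groups = [[5.0, 5.5, 6.0], [4.0, 4.5, 5.0], [3.0, 3.5, 4.0], [2.0, 2.5, 3.0]]
F, dfb, dfw = one_way_anova_F(groups)
print(f"F({dfb}, {dfw}) = {F:.2f}")
```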

In humans, among the exposed individuals in the matched cohort, 25%, 57%, and 18% received daily doses of spironolactone of less than 25 mg/day, 25-49 mg/day, and 50 mg/day or higher, respectively, with a median follow-up time of 542 (interquartile range, 337-730) days.

The AUDIT-C scores decreased during the study period in both treatment groups, with a larger decrease in average AUDIT-C scores among the exposed vs. unexposed individuals.



“These are very exciting times because, thanks to the progress in the addiction biomedical research field, we are increasing our understanding of the mechanisms how some people develop AUD; hence we can use this knowledge to identify new targets.” The current study “is an example of these ongoing efforts,” said Dr. Leggio.

“It is important to note that [these results] are important but preliminary.” At this juncture, “it would be too premature to think about prescribing spironolactone to treat AUD,” he added.

 

Exciting findings

Commenting on the study, Joyce Besheer, PhD, professor, department of psychiatry and Bowles Center for Alcohol Studies, University of North Carolina at Chapel Hill, called the study an “elegant demonstration of translational science.”

“While clinical trials will be needed to determine whether this medication is effective at reducing drinking in patients with AUD, these findings are exciting as they suggest that spironolactone may be a promising compound and new treatment options for AUD are much needed,” said Dr. Besheer, who was not involved with the current study.

Dr. Leggio agreed. “We now need prospective, placebo-controlled studies to assess the potential safety and efficacy of spironolactone in people with AUD,” he said.

This work was supported by the National Institutes of Health and the NIAAA. Dr. Leggio, study coauthors, and Dr. Besheer declare no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Spironolactone, a potassium-sparing diuretic typically used to treat heart failure and hypertension, shows promise in treating alcohol use disorder (AUD), new research suggests.

Researchers at the National Institute on Drug Abuse, the National Institute on Alcohol Abuse and Alcoholism, and Yale University, New Haven, Conn., investigated the impact of spironolactone on AUD.

Initially, they studied rodents and found that spironolactone reduced binge drinking in mice and reduced self-administration of alcohol in rats without adversely affecting food or water intake or causing motor or coordination problems.

They also analyzed electronic health records of patients drawn from the United States Veterans Affairs health care system to explore changes in alcohol use after spironolactone was initiated for other conditions. They found a significant link between spironolactone treatment and a reduction in self-reported alcohol consumption, with the largest effects observed among those who reported hazardous/heavy episodic alcohol use before starting the drug.

“Combining findings across three species and different types of research studies, and then seeing similarities in these data, gives us confidence that we are onto something potentially important scientifically and clinically,” senior coauthor Lorenzo Leggio, MD, PhD, senior investigator in the Clinical Psychoneuroendocrinology and Neuropsychopharmacology Section, a joint NIDA and NIAAA laboratory, said in a news release.

The study was published online in Molecular Psychiatry.
 

There is a “critical need to increase the armamentarium of pharmacotherapies to treat individuals with AUD,” the authors note, adding that neuroendocrine systems involved in alcohol craving and drinking “offer promising pharmacologic targets in this regard.”

“Both our team and others have observed that patients with AUD often present with changes in peripheral hormones, including aldosterone, which plays a key role in regulating blood pressure and electrolytes,” Dr. Leggio said in an interview.

Spironolactone is a nonselective mineralocorticoid receptor (MR) antagonist. In studies in animal models, investigators said they found “an inverse correlation between alcohol drinking and the expression of the MR in the amygdala, a key brain region in the development and maintenance of AUD and addiction in general.”

Taken together, this led them to hypothesize that blocking the MR, which is the mechanism of action of spironolactone, “could be a novel pharmacotherapeutic approach for AUD,” he said.

Previous research by the same group of researchers suggested spironolactone “may be a potential new medication to treat patients with AUD.” The present study expanded on those findings and consisted of a three-part investigation.

In the current study, the investigators tested different dosages of spironolactone on binge-like alcohol consumption in male and female mice and assessed food and water intake, blood alcohol levels, motor coordination, and spontaneous locomotion.

They then tested the effects of different dosages of spironolactone injections on operant alcohol self-administration in alcohol-dependent and nondependent male and female rats, also testing blood alcohol levels and motor coordination.

Finally, they analyzed health records of veterans to examine the association between at least 60 continuous days of spironolactone treatment and self-reported alcohol consumption (measured by the Alcohol Use Disorders Identification Test-Consumption [AUDIT-C]).

Each of the spironolactone-exposed patients was matched using propensity scores with up to five unexposed patients who had reported alcohol consumption in the 2 years prior to the index date.

The final analysis included a matched cohort of 10,726 spironolactone-exposed individuals who were matched to 34,461 unexposed individuals.
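The 1:5 propensity-score matching described above can be sketched as follows. This is a minimal illustration, not the study's actual specification: the covariates, cohort size, exposure rate, and greedy nearest-neighbor strategy are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: two illustrative covariates (age, baseline AUDIT-C)
# and an exposure flag for spironolactone treatment.
n = 1000
X = np.column_stack([rng.normal(55, 10, n), rng.integers(0, 12, n)])
exposed = rng.random(n) < 0.10  # ~10% exposed in this toy cohort

# 1. Estimate propensity scores: P(exposed | covariates).
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

# 2. Greedy nearest-neighbor matching: up to 5 unexposed patients per
#    exposed patient, matched on propensity score, without replacement.
unexposed_idx = list(np.flatnonzero(~exposed))
matches = {}
for i in np.flatnonzero(exposed):
    unexposed_idx.sort(key=lambda j: abs(ps[j] - ps[i]))
    matches[i] = unexposed_idx[:5]
    unexposed_idx = unexposed_idx[5:]

print(f"{len(matches)} exposed matched to "
      f"{sum(len(v) for v in matches.values())} unexposed")
```

In practice, matching implementations also enforce a caliper (a maximum allowed propensity-score distance) and check covariate balance after matching; both are omitted here for brevity.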
 

 

 

New targets

Spironolactone reduced alcohol intake in mice drinking a sweetened alcohol solution; a 2-way ANOVA revealed main effects of dose (F(4,52) = 9.09; P < .0001) and sex, with female mice drinking more alcohol than male mice (F(1,13) = 6.05; P = .02).

Post hoc comparisons showed that spironolactone at doses of 50, 100, and 200 mg/kg significantly reduced alcohol intake (P values = .007, .002, and .0001, respectively).

In mice drinking an unsweetened alcohol solution, the 2-way repeated-measures ANOVA similarly found a main effect of dose (F(4,52) = 5.77; P = .0006) but not of sex (F(1,13) = 1.41; P = .25).

Spironolactone had no effect on the mice’s intake of a sweet solution without alcohol and had no impact on the consumption of food and water or on locomotion and coordination.

In rats, a 2-way ANOVA revealed a significant main effect of dose (F(3,66) = 43.95; P < .001), with a post hoc test indicating that spironolactone at 25, 50, and 75 mg/kg reduced alcohol self-administration in both alcohol-dependent and nondependent rats (all P values = .0001).

In humans, among the exposed individuals in the matched cohort, 25%, 57%, and 18% received spironolactone at daily doses of less than 25 mg, 25-49 mg, and 50 mg or higher, respectively, with a median follow-up of 542 days (interquartile range, 337-730).

The AUDIT-C scores decreased during the study period in both treatment groups, with a larger decrease in average AUDIT-C scores among the exposed vs. unexposed individuals.
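The comparison above amounts to contrasting the mean within-person change in AUDIT-C score between the two groups; a toy sketch, on invented records (the real analysis adjusted for matching and covariates):

```python
import pandas as pd

# Invented baseline and follow-up AUDIT-C scores for eight patients.
df = pd.DataFrame({
    "group":    ["exposed"] * 4 + ["unexposed"] * 4,
    "baseline": [6, 5, 7, 4, 6, 5, 7, 4],
    "followup": [3, 3, 4, 2, 5, 4, 6, 4],
})
df["change"] = df["followup"] - df["baseline"]

# A more negative mean change = a larger drop in reported drinking.
mean_change = df.groupby("group")["change"].mean()
print(mean_change)
```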



“These are very exciting times because, thanks to the progress in the addiction biomedical research field, we are increasing our understanding of the mechanisms how some people develop AUD; hence we can use this knowledge to identify new targets.” The current study “is an example of these ongoing efforts,” said Dr. Leggio.

“It is important to note that [these results] are important but preliminary.” At this juncture, “it would be too premature to think about prescribing spironolactone to treat AUD,” he added.

 

Exciting findings

Commenting on the study, Joyce Besheer, PhD, professor, department of psychiatry and Bowles Center for Alcohol Studies, University of North Carolina at Chapel Hill, called the study an “elegant demonstration of translational science.”

“While clinical trials will be needed to determine whether this medication is effective at reducing drinking in patients with AUD, these findings are exciting as they suggest that spironolactone may be a promising compound and new treatment options for AUD are much needed,” said Dr. Besheer, who was not involved with the current study.

Dr. Leggio agreed. “We now need prospective, placebo-controlled studies to assess the potential safety and efficacy of spironolactone in people with AUD,” he said.

This work was supported by the National Institutes of Health and the NIAAA. Dr. Leggio, study coauthors, and Dr. Besheer declare no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM MOLECULAR PSYCHIATRY


Unconventional wisdom: Major depression tied to childhood trauma is treatable

Article Type
Changed
Tue, 09/27/2022 - 15:28

Despite a higher symptom burden, patients with major depressive disorder (MDD) and a history of childhood trauma (CT) can achieve significant recovery following treatment with a combination of pharmacotherapy and psychotherapy, new research suggests.
 

Results from a meta-analysis of 29 studies from 1966 to 2019, which included almost 7,000 adults with MDD, showed that more than 60% reported a history of CT. But despite having more severe depression at baseline, those with CT benefited from active treatment. Effect sizes were comparable, and dropout rates were similar to those of their counterparts without CT.

“Evidence-based psychotherapy and pharmacotherapy should be offered to depressed patients, regardless of their childhood trauma status,” lead author Erika Kuzminskaite, MSc, a PhD candidate at Amsterdam UMC department of psychiatry, the Netherlands, told this news organization.

“Screening for childhood trauma is important to identify individuals at risk for more severe course of the disorder and post-treatment residual symptoms,” she added.

The study was published online in the Lancet Psychiatry.
 

Common and potent risk factor

The researchers note that CT is common and is a potent risk factor for depression. Previous studies have “consistently indicated significantly higher severity and persistence of depressive symptoms in adult patients with depression and a history of childhood trauma.”

Previous individual and meta-analytic studies “indicated poorer response to first-line depression treatments in patients with childhood trauma, compared to those without trauma, suggesting the need for new personalized treatments for depressed patients with childhood trauma history,” Ms. Kuzminskaite said.

“However, the evidence on poorer treatment outcomes has not been definitive, and a comprehensive meta-analysis of available findings has been lacking,” she added.

The previous meta-analyses showed high between-study heterogeneity, and some primary studies reported similar or even superior improvement for patients with CT, compared with those without such history, following treatment with evidence-based psychotherapy or pharmacotherapy.

Previous studies also did not investigate the “relative contribution of different childhood trauma types.”

To address this gap, investigators in the Childhood Trauma Meta-Analysis Study Group conducted the “largest and most comprehensive study of available evidence examining the effects of childhood trauma on the efficacy and effectiveness of first-line treatments for adults with MDD.”

To be included, a study had to focus on adults over 18 years old who had received a primary diagnosis of depression. The study had to have included an available assessment of childhood trauma, and patients were required to have undergone psychotherapy and/or pharmacotherapy for depression alone or in combination with other guideline-recommended treatments. Studies were also required to have a comparator group, when applicable, and to have reported depression severity before and after the acute treatment phase.

Of 10,505 publications, 54 trials met inclusion criteria; of these, 29 trials (20 randomized controlled trials and 9 open trials) encompassing 6,830 participants aged 18-85 years had data made available by the study authors and were included in the current analysis.

Most studies focused on MDD; 11 trials focused on patients with chronic or treatment-resistant depression.

The primary outcome was “depression severity change from baseline to the end of the acute treatment phase” (expressed as standardized effect size – Hedges’ g).
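Hedges’ g is the standardized mean difference (Cohen’s d) multiplied by a small-sample bias correction. A minimal sketch, with invented inputs:

```python
import numpy as np

def hedges_g(x, y):
    """Standardized mean difference with Hedges' small-sample
    correction factor J (df = nx + ny - 2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    # Pooled standard deviation from the two unbiased sample variances.
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp
    j = 1 - 3 / (4 * (nx + ny) - 9)  # bias correction
    return j * d

# Toy check: identical groups give g = 0; a uniform one-point shift
# gives a moderate negative effect.
print(hedges_g([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0
print(hedges_g([1, 2, 3, 4], [2, 3, 4, 5]))
```

By the conventional rule of thumb, |g| around 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the baseline-severity difference reported below in the small range.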
 

 

 

Greater treatment motivation?

Of the included patients, 62% reported a history of CT. They were found to have more severe depression at baseline, compared with those without CT (g = 0.202; 95% confidence interval, 0.145-0.258; I² = 0%).

The benefits from active treatment obtained by these patients with CT were similar to the benefits obtained by their counterparts without CT (between-group treatment effect difference: g = 0.016; 95% CI, –0.094 to 0.125; I² = 44.3%).

No significant difference in active treatment effects (in comparison with control condition) was found between individuals with and those without CT (g = 0.605; 95% CI, 0.294-0.916; I² = 58.0%; and g = 0.178; 95% CI, –0.195 to 0.552; I² = 67.5%, respectively; between-group difference, P = .051).

Dropout rates were similar for the participants with and those without CT (risk ratio, 1.063; 95% CI, 0.945-1.195; I² = 0%).
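The dropout comparison above is reported as a risk ratio with a 95% CI; one common way to compute such an interval is a Wald interval on the log scale. The counts below are invented for illustration, not the study’s.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of a/n1 vs. b/n2 with a log-scale Wald 95% CI."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts: 120/500 dropouts with CT vs. 110/490 without.
rr, lo, hi = risk_ratio_ci(120, 500, 110, 490)
print(f"RR {rr:.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```

A CI that spans 1.0, as in the study’s result (0.945-1.195), indicates no statistically significant difference in dropout risk.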

“Findings did not significantly differ by childhood trauma type, study design, depression diagnosis, assessment method of childhood trauma, study quality, year, or treatment type or length,” the authors report.

The findings did, however, differ by country, with North American studies showing larger treatment effects for patients with CT, compared with studies conducted in Asian-Pacific countries (g = 0.150; 95% CI, 0.030-0.269; vs. g = –0.255; 95% CI, –0.508 to –0.002, respectively; corrected false discovery rate, 0.0080). “However, because of limited power, these findings should be interpreted with caution,” the authors warn.

“It could be a chance finding and is certainly not causal,” Ms. Kuzminskaite suggested.

Most studies (21 of the 29) had a “moderate to high risk of bias.” But when the researchers conducted a sensitivity analysis in the low-bias studies, they found that results were similar to those of the primary analysis that included all the studies.

“Treatments were similarly effective for patients with and without childhood trauma, with slightly larger active treatment (vs. control condition – placebo, wait list, care-as-usual) effects for patients with childhood trauma history,” Ms. Kuzminskaite said.

“Some evidence suggests that patients with childhood trauma are characterized by greater treatment motivation,” she noted. Moreover, “they are also more severely depressed prior to treatment [and] thus have more room for improvement.”
 

‘Hopeful message’

Commenting for this news organization, Yvette Sheline, MD, McLure professor of psychiatry, radiology, and neurology and director of the Center for Neuromodulation in Depression and Stress, University of Pennsylvania, Philadelphia, called it a “well-executed” and “straightforward” study “with clear-cut findings.”

Dr. Sheline, the director of the section on mood, anxiety, and trauma, who was not involved with the study, agrees with the authors’ conclusions – “to use evidence-based treatments for depression in all patients,” with or without a history of CT.

In an accompanying editorial, Antoine Yrondi, MD, PhD, of Université de Toulouse (France), called the findings “important and encouraging” but cautioned that CT could be associated with conditions other than depression, which could make MDD “more difficult to treat.”

Nevertheless, the meta-analysis “delivers a hopeful message to patients with childhood trauma that evidence-based psychotherapy and pharmacotherapy could improve depressive symptoms,” Dr. Yrondi said.

Dr. Yrondi encouraged physicians not to neglect CT in patients with MDD. “For this, it is important that physicians are trained to evaluate childhood trauma and to take it into account in their daily practice.”

No source of funding for the study was listed. The authors and Dr. Sheline have disclosed no relevant financial relationships. Dr. Yrondi has received speaker’s honoraria from AstraZeneca, Janssen, Lundbeck, Otsuka, and Jazz and has carried out clinical studies in relation to the development of a medicine for Janssen and Lundbeck that are unrelated to this work.

A version of this article first appeared on Medscape.com.


Despite a higher symptom burden, patients with major depressive disorder (MDD) and a history of childhood trauma (CT) can achieve significant recovery following treatment with a combination of pharmacotherapy and psychotherapy, new research suggests.
 

Results from a meta-analysis of 29 studies from 1966 to 2019, which included almost 7,000 adults with MDD, showed that more than 60% reported a history of CT. But despite having more severe depression at baseline, those with CT benefited from active treatment. Effect sizes were comparable, and dropout rates were similar to those of their counterparts without CT.

“Evidence-based psychotherapy and pharmacotherapy should be offered to depressed patients, regardless of their childhood trauma status,” lead author Erika Kuzminskaite, MSc, a PhD candidate in the department of psychiatry at Amsterdam UMC, the Netherlands, told this news organization.

“Screening for childhood trauma is important to identify individuals at risk for more severe course of the disorder and post-treatment residual symptoms,” she added.

The study was published online in the Lancet Psychiatry.
 

Common and potent risk factor

The researchers note that CT is common and is a potent risk factor for depression. Previous studies have “consistently indicated significantly higher severity and persistence of depressive symptoms in adult patients with depression and a history of childhood trauma.”

Previous individual and meta-analytic studies “indicated poorer response to first-line depression treatments in patients with childhood trauma, compared to those without trauma, suggesting the need for new personalized treatments for depressed patients with childhood trauma history,” Ms. Kuzminskaite said.

“However, the evidence on poorer treatment outcomes has not been definitive, and a comprehensive meta-analysis of available findings has been lacking,” she added.

The previous meta-analyses showed high between-study heterogeneity, and some primary studies reported similar or even superior improvement for patients with CT, compared with those without such history, following treatment with evidence-based psychotherapy or pharmacotherapy.

Previous studies also did not investigate the “relative contribution of different childhood trauma types.”

To address this gap, investigators in the Childhood Trauma Meta-Analysis Study Group conducted the “largest and most comprehensive study of available evidence examining the effects of childhood trauma on the efficacy and effectiveness of first-line treatments for adults with MDD.”

To be included, a study had to focus on adults over 18 years old who had received a primary diagnosis of depression. The study had to have included an available assessment of childhood trauma, and patients were required to have undergone psychotherapy and/or pharmacotherapy for depression alone or in combination with other guideline-recommended treatments. Studies were also required to have a comparator group, when applicable, and to have reported depression severity before and after the acute treatment phase.

Of 10,505 publications, 54 trials met inclusion criteria; of these, 29 (20 randomized controlled trials and 9 open trials), encompassing 6,830 participants aged 18-85 years, included data that had been made available by authors of the various studies and were included in the current analysis.

Most studies focused on MDD; 11 trials focused on patients with chronic or treatment-resistant depression.

The primary outcome was “depression severity change from baseline to the end of the acute treatment phase” (expressed as standardized effect size – Hedges’ g).

Greater treatment motivation?

Of the included patients, 62% reported a history of CT. They were found to have more severe depression at baseline, compared with those without CT (g = .202; 95% confidence interval, 0.145-0.258; I² = 0%).

The benefits from active treatment obtained by these patients with CT were similar to the benefits obtained by their counterparts without CT (between-group treatment effect difference: g = .016; 95% CI, –0.094 to 0.125; I² = 44.3%).

No significant difference in active treatment effects (in comparison with control condition) was found between individuals with and those without CT (g = .605; 95% CI, 0.294-0.916; I² = 58.0%; and g = .178; 95% CI, –0.195 to 0.552; I² = 67.5%, respectively; between-group difference P = .051).

Dropout rates were similar for the participants with and those without CT (risk ratio, 1.063; 95% CI, 0.945-1.195; I² = 0%).

“Findings did not significantly differ by childhood trauma type, study design, depression diagnosis, assessment method of childhood trauma, study quality, year, or treatment type or length,” the authors report.

The findings did, however, differ by country, with North American studies showing larger treatment effects for patients with CT, compared with studies conducted in Asian-Pacific countries (g = 0.150; 95% CI, 0.030-0.269; vs. g = –0.255; 95% CI, –0.508 to –0.002, respectively; corrected false discovery rate, 0.0080). “However, because of limited power, these findings should be interpreted with caution,” the authors warn.

“It could be a chance finding and is certainly not causal,” Ms. Kuzminskaite suggested.

Most studies (21 of the 29) had a “moderate to high risk of bias.” But when the researchers conducted a sensitivity analysis in the low-bias studies, they found that results were similar to those of the primary analysis that included all the studies.

“Treatments were similarly effective for patients with and without childhood trauma, with slightly larger active treatment (vs. control condition – placebo, wait list, care-as-usual) effects for patients with childhood trauma history,” Ms. Kuzminskaite said.

“Some evidence suggests that patients with childhood trauma are characterized by greater treatment motivation,” she noted. Moreover, “they are also more severely depressed prior to treatment [and] thus have more room for improvement.”
 

‘Hopeful message’

Commenting for this news organization, Yvette Sheline, MD, McLure professor of psychiatry, radiology, and neurology and director of the Center for Neuromodulation in Depression and Stress, University of Pennsylvania, Philadelphia, called it a “well-executed” and “straightforward” study “with clear-cut findings.”

Dr. Sheline, the director of the section on mood, anxiety, and trauma, who was not involved with the study, agrees with the authors’ conclusions – “to use evidence-based treatments for depression in all patients,” with or without a history of CT.

In an accompanying editorial, Antoine Yrondi, MD, PhD, of Université de Toulouse (France), called the findings “important and encouraging” but cautioned that CT could be associated with conditions other than depression, which could make MDD “more difficult to treat.”

Nevertheless, the meta-analysis “delivers a hopeful message to patients with childhood trauma that evidence-based psychotherapy and pharmacotherapy could improve depressive symptoms,” Dr. Yrondi said.

Dr. Yrondi encouraged physicians not to neglect CT in patients with MDD. “For this, it is important that physicians are trained to evaluate childhood trauma and to take it into account in their daily practice.”

No source of funding for the study was listed. The authors and Dr. Sheline have disclosed no relevant financial relationships. Dr. Yrondi has received speaker’s honoraria from AstraZeneca, Janssen, Lundbeck, Otsuka, and Jazz and has carried out clinical studies in relation to the development of a medicine for Janssen and Lundbeck that are unrelated to this work.

A version of this article first appeared on Medscape.com.

FROM LANCET PSYCHIATRY


Timing of food intake a novel strategy for treating mood disorders?

Article Type
Changed
Tue, 09/27/2022 - 11:53

Shift workers who confine their eating to the daytime may experience fewer mood symptoms compared to those who eat both day and night, new research suggests.

Investigators at Brigham and Women’s Hospital, Boston, created a simulated nightwork schedule for 19 individuals in a laboratory setting. Participants then engaged in two different meal timing models – daytime-only meals (DMI), and meals taken during both daytime and nighttime (DNMC).

Depression- and anxiety-like mood levels increased by 26% and 16%, respectively, among the daytime and nighttime eaters, but there was no such increase in daytime-only eaters.

“Our findings provide evidence for the timing of food intake as a novel strategy to potentially minimize mood vulnerability in individuals experiencing circadian misalignment, such as people engaged in shift work, experiencing jet lag, or suffering from circadian rhythm disorders,” co–corresponding author Frank A.J.L. Scheer, PhD, director of the medical chronobiology program, Brigham and Women’s Hospital, Boston, said in a news release.

The study was published online in the Proceedings of the National Academy of Sciences.
 

Misaligned circadian clock

“Shift workers often experience a misalignment between their central circadian clock in the brain and daily behaviors, such as sleep/wake and fasting/eating cycles,” said senior author Sarah Chellappa, MD, PhD, currently the Alexander von Humboldt Experienced Fellow in the department of nuclear medicine, University of Cologne (Germany). Dr. Chellappa was a postdoctoral fellow at Brigham and Women’s Hospital when the study was conducted.

“They also have a 25%-40% higher risk of depression and anxiety,” she continued. “Since meal timing is important for physical health and diet is important for mood, we sought to find out whether meal timing can benefit mental health as well.”

Given that impaired glycemic control is a “risk factor for mood disruption,” the researchers tested the prediction that daytime eating “would prevent mood vulnerability, despite simulated night work.”

To investigate the question, they conducted a parallel-design, randomized clinical trial that included a 14-day circadian laboratory protocol with 19 healthy adults (12 men, 7 women; mean age, 26.5 ± 4.1 years) who underwent a forced desynchrony (FD) in dim light for 4 “days,” each of which consisted of 28 hours. Each 28-hour “day” resulted in an additional 4-hour misalignment between the central circadian clock and external behavioral/environmental cycles.

By the fourth day, the participants were misaligned by 12 hours, compared to baseline (that is, the first day). They were then randomly assigned to two groups.

The DNMC group – the control group – had a “typical 28-hour FD protocol,” with behavioral and environmental cycles (sleep/wake, rest/activity, supine/upright posture, dark during scheduled sleep/dim light during wakefulness) scheduled on a 28-hour cycle. Thus, they took their meals during both “daytime” and “nighttime,” which is the typical way that night workers eat.

The DMI group underwent a modified 28-hour FD protocol, with all cycles scheduled on a 28-hour basis, except for the fasting/eating cycle, which was scheduled on a 24-hour basis, resulting in meals consumed only during the “daytime.”

Depression- and anxiety-like mood (which “correspond to an amalgam of mood states typically observed in depression and anxiety”) were assessed every hour during the 4 FD days, using computerized visual analogue scales.

Nutritional psychiatry

Participants in the DNMC group experienced an increase from baseline in depression- and anxiety-like mood levels of 26.2% (95% confidence interval, 21-31.5; P = .001; P value using false discovery rate [PFDR] = .01; effect-size r, 0.78) and 16.1% (95% CI, 8.5-23.6; P = .005; PFDR = .001; effect-size r, 0.47), respectively.

By contrast, a similar increase did not take place in the DMI group for either depression- or anxiety-like mood levels (95% CI, –5.7% to 7.4%, P not significant and 95% CI, –3.1% to 9.9%, P not significant, respectively).

The researchers tested “whether increased mood vulnerability during simulated night work was associated with the degree of internal circadian misalignment,” defined as the “change in the phase difference between the acrophase of circadian glucose rhythms and the bathyphase of circadian body temperature rhythms.”

They found that a larger degree of internal circadian misalignment was “robustly associated” with more depression-like (r, 0.77; P = .001) and anxiety-like (r, 0.67; P = .002) mood levels during simulated night work.

The findings imply that meal timing had “moderate to large effects in depression-like and anxiety-like mood levels during night work, and that such effects were associated with the degree of internal circadian misalignment,” the authors wrote.

The laboratory protocol of both groups was identical except for the timing of meals. The authors noted that the “relevance of diet on sleep, circadian rhythms, and mental health is receiving growing awareness with the emergence of a new field, nutritional psychiatry.”

People who experience depression “often report poor-quality diets with high carbohydrate intake,” and there is evidence that adherence to the Mediterranean diet is associated “with lower odds of depression, anxiety, and psychological distress.”

They cautioned that although these emerging studies suggest an association between dietary factors and mental health, “experimental studies in individuals with depression and/or anxiety/anxiety-related disorders are required to determine causality and direction of effects.”

They described meal timing as “an emerging aspect of nutrition, with increasing research interest because of its influence on physical health.” However, they noted, “the causal role of the timing of food intake on mental health remains to be tested.”
 

Novel findings

Commenting for this article, Kathleen Merikangas, PhD, distinguished investigator and chief, genetic epidemiology research branch, intramural research program, National Institute of Mental Health, Bethesda, Md., described the research as important with novel findings.

The research “employs the elegant, carefully controlled laboratory procedures that have unraveled the influence of light and other environmental cues on sleep and circadian rhythms over the past 2 decades,” said Dr. Merikangas, who was not involved with the study.

“One of the most significant contributions of this work is its demonstration of the importance of investigating circadian rhythms of multiple systems rather than solely focusing on sleep, eating, or emotional states that have often been studied in isolation,” she pointed out.

“Growing evidence from basic research highlights the interdependence of multiple human systems that should be built into interventions that tend to focus on one or two domains.”

She recommended that this work be replicated “in more diverse samples ... in both controlled and naturalistic settings ... to test both the generalizability and mechanism of these intriguing findings.”

The study was funded by the National Institutes of Health. Individual investigators were funded by the Alexander von Humboldt Foundation and the American Diabetes Association. Dr. Chellappa and Dr. Merikangas have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES
