Mortality post perioperative CPR climbs with patient frailty
The frailer patients were going into surgery, according to their scores on an established frailty index, the greater their adjusted mortality risk at 30 days and the likelier they were to be discharged to a location other than their home.
The findings are based on more than 3,000 patients in an American College of Surgeons (ACS) quality improvement registry who underwent CPR at noncardiac surgery, about one-fourth of whom scored at least 40 on the revised Risk Analysis Index (RAI). The frailty index accounts for the patient’s comorbidities, cognition, functional and nutritional status, and other factors as predictors of postoperative mortality risk.
Such CPR for perioperative cardiac arrest “should not be considered futile just because a patient is frail, but neither should cardiac arrest be considered as ‘reversible’ in this population, as previously thought,” lead author Matthew B. Allen, MD, of Brigham and Women’s Hospital, Boston, said in an interview.
“We know that patients who are frail have higher risk of complications and mortality after surgery, and recent studies have demonstrated that frailty is associated with very poor outcomes following CPR in nonsurgical settings,” said Dr. Allen, an attending physician in the department of anesthesiology, perioperative, and pain medicine at his center.
Although cardiac arrest is typically regarded as being “more reversible” in the setting of surgery and anesthesia than elsewhere in the hospital, he observed, there’s very little data on whether that is indeed the case for frail patients.
The current analysis provides “a heretofore absent base of evidence to guide decision-making regarding CPR in patients with frailty who undergo surgery,” states the report, published in JAMA Network Open.
The 3,058 patients in the analysis, from the ACS National Surgical Quality Improvement database, received CPR for cardiac arrest during or soon after noncardiac surgery. Their mean age was 71 and 44% were women.
Their RAI scores ranged from 14 to 71 and averaged 37.7; one-fourth of the patients had scores of 40 or higher, the study’s threshold for identifying patients as “frail.”
Overall in the cohort, 67.9% of cardiac arrests occurred during surgeries that entailed low-to-moderate physiologic stress (an Operative Stress Score of 1 to 3), and 39.1% occurred in the setting of emergency surgery.
A greater proportion of frail than of nonfrail patients experienced cardiac arrest during emergency surgeries (42% vs. 38%), and the same relationship was observed during low-to-moderate stress surgeries (76.6% vs. 64.8%). General anesthesia was used in about 93% of procedures for both frail and nonfrail patients, the report states.
The primary endpoint, 30-day mortality, was 58.6% overall, 67.4% in frail patients, and 55.6% for nonfrail patients. Frailty and mortality were positively associated, with an adjusted odds ratio (AOR) of 1.35 (95% confidence interval [CI], 1.11-1.65, P = .003) in multivariate analysis.
Of the cohort’s 1,164 patients who had been admitted from home and survived to discharge, 38.6% were discharged to a destination other than home; the corresponding rates for frail and nonfrail patients were 59.3% and 33.9%, respectively. Frailty and nonhome discharge were positively correlated with an AOR of 1.85 (95% CI, 1.31-2.62, P < .001).
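As a rough check on these figures, the unadjusted odds ratios implied by the raw percentages can be computed directly; they differ from the study's adjusted values (1.35 and 1.85) because the published model controls for covariates. This is an illustrative back-of-the-envelope sketch, not a recalculation of the study's analysis:

```python
# Unadjusted odds ratios implied by the raw rates reported above.
# Illustrative only: the study's adjusted ORs (1.35 and 1.85) come
# from a multivariate model that controls for covariates.

def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1.0 - p)

# 30-day mortality: 67.4% in frail vs 55.6% in nonfrail patients
or_mortality = odds(0.674) / odds(0.556)

# Nonhome discharge among survivors admitted from home:
# 59.3% frail vs 33.9% nonfrail
or_discharge = odds(0.593) / odds(0.339)

print(f"Unadjusted OR, 30-day mortality:   {or_mortality:.2f}")  # ~1.65
print(f"Unadjusted OR, nonhome discharge:  {or_discharge:.2f}")  # ~2.84
```

That the crude ratios exceed the adjusted ones is expected: part of the raw frail–nonfrail gap is explained by covariates in the model.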
“There is no such thing as a low-risk procedure in patients who are frail,” Dr. Allen said in an interview. “Frail patients should be medically optimized prior to undergoing surgery and anesthesia, and plans should be tailored to patients’ vulnerabilities to reduce the risk of complications and facilitate rapid recognition and treatment when they occur.”
Moreover, he said, management of clinical decompensation in the perioperative period should be a part of the shared decision-making process “to establish a plan aligned with the patients’ priorities whenever possible.”
The current study quantifies risk associated with frailty in the surgical setting, and “this quantification can help providers, patients, and insurers better grasp the growing frailty problem,” Balachundhar Subramaniam, MD, MPH, of Harvard Medical School, Boston, said in an interview.
Universal screening for frailty is “a must in all surgical patients” to help identify those who are high-risk and reduce their chances for perioperative adverse events, said Dr. Subramaniam, who was not involved in the study.
“Prehabilitation with education, nutrition, physical fitness, and psychological support offer the best chance of significantly reducing poor outcomes” in frail patients, he said, along with “continuous education” in the care of frail patients.
University of Colorado surgeon Joseph Cleveland, MD, not part of the current study, said that it “provides a framework for counseling patients” regarding their do-not-resuscitate status.
“We can counsel patients with frailty with this information,” he said, “that if their heart should stop or go into an irregular rhythm, their chances of surviving are not greater than 50% and they have a more than 50% chance of not being discharged home.”
Dr. Allen reported receiving a clinical translational starter grant from Brigham and Women’s Hospital Department of Anesthesiology; disclosures for the other authors are in the original article. Dr. Subramaniam disclosed research funding from Masimo and Merck and serving as an education consultant for Masimo. Dr. Cleveland reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Aspirin not the best antiplatelet for CAD secondary prevention in meta-analysis
Patients with established coronary artery disease (CAD) may fare better in secondary prevention on a P2Y12 inhibitor such as clopidogrel or ticagrelor rather than aspirin, suggests a patient-level meta-analysis of seven randomized trials.
The more than 24,000 patients in the meta-analysis, called PANTHER, had documented stable CAD, prior myocardial infarction (MI), or recent or remote surgical or percutaneous coronary revascularization.
About half of patients in each antiplatelet monotherapy trial received clopidogrel or ticagrelor, and the other half received aspirin. Follow-ups ranged from 6 months to 3 years.
Those taking a P2Y12 inhibitor showed a 12% reduction in risk (P = .012) for the primary efficacy outcome, a composite of cardiovascular (CV) death, MI, and stroke, over a median of about 1.35 years. The difference was driven primarily by a 23% reduction in risk for MI (P < .001); mortality seemed unaffected by antiplatelet treatment assignment.
Although the P2Y12 inhibitor and aspirin groups were similar with respect to risk of major bleeding, the P2Y12 inhibitor group showed significant reductions in risk for gastrointestinal (GI) bleeding, definite stent thrombosis, and hemorrhagic stroke; rates of hemorrhagic stroke were well under 1% in both groups.
The treatment effects were consistent across patient subgroups, including whether the aspirin comparison was with clopidogrel or ticagrelor.
“Taken together, our data challenge the central role of aspirin in secondary prevention and support a paradigm shift toward P2Y12 inhibitor monotherapy as long-term antiplatelet strategy in the sizable population of patients with coronary atherosclerosis,” Felice Gragnano, MD, PhD, said in an interview. “Given [their] superior efficacy and similar overall safety, P2Y12 inhibitors may be preferred [over] aspirin for the prevention of cardiovascular events in patients with CAD.”
Dr. Gragnano, of the University of Campania Luigi Vanvitelli, Caserta, Italy, who called PANTHER “the largest and most comprehensive synthesis of individual patient data from randomized trials comparing P2Y12 inhibitor monotherapy with aspirin monotherapy,” is lead author of the study, which was published online in the Journal of the American College of Cardiology.
Current guidelines recommend aspirin for antiplatelet monotherapy for patients with established CAD, Dr. Gragnano said, but “the primacy of aspirin in secondary prevention is based on historical trials conducted in the 1970s and 1980s and may not apply to contemporary practice.”
Moreover, later trials that compared P2Y12 inhibitors with aspirin for secondary prevention produced “inconsistent results,” possibly owing to their heterogeneous populations of patients with coronary, cerebrovascular, or peripheral vascular disease, he said. Study-level meta-analyses in this area “provide inconclusive evidence” because they haven’t evaluated treatment effects exclusively in patients with established CAD.
Most of the seven trials’ 24,325 participants had a history of MI, and some had peripheral artery disease (PAD); the rates were 56.2% and 9.1%, respectively. Coronary revascularization, either percutaneous or surgical, had been performed for about 70%. Most (61%) had presented with acute coronary syndromes, and the remainder had presented with chronic CAD.
About 76% of the combined cohorts were from Europe or North America; the rest were from Asia. The mean age of the patients was 64 years, and about 22% were women.
In all, 12,175 had been assigned to P2Y12 inhibitor monotherapy (62% received clopidogrel and 38% received ticagrelor); 12,147 received aspirin at dosages ranging from 75 mg to 325 mg daily.
The hazard ratio (HR) for the primary efficacy outcome, P2Y12 inhibitors vs. aspirin, was significantly reduced, at 0.88 (95% confidence interval [CI], 0.79-0.97; P = .012); the number needed to treat (NNT) to prevent one primary event over 2 years was 121, the report states.
The corresponding HR for MI was 0.77 (95% CI, 0.66-0.90; P < .001), for an NNT benefit of 136. For net adverse clinical events, the HR was 0.89 (95% CI, 0.81-0.98; P = .020), for an NNT benefit of 121.
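A number needed to treat follows directly from the absolute risk reduction (NNT = 1/ARR). The event rates in this sketch are hypothetical, chosen only so that their difference reproduces the reported 2-year NNT of 121; the actual rates vary across the pooled trials:

```python
# NNT from an absolute risk reduction: NNT = 1 / ARR, rounded up.
# The two event rates below are hypothetical placeholders chosen so
# the difference matches the reported 2-year NNT of 121.
import math

def nnt(rate_control: float, rate_treated: float) -> int:
    """Number needed to treat to prevent one event."""
    arr = rate_control - rate_treated  # absolute risk reduction
    return math.ceil(1.0 / arr)

# Hypothetical 2-year primary-event rates: aspirin vs P2Y12 inhibitor
print(nnt(0.0650, 0.0567))  # ARR of 0.83 percentage points -> NNT 121
```

An NNT of 121 over 2 years thus corresponds to an absolute risk reduction of a bit under 1 percentage point, which puts the 12% relative reduction in context.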
Risk for major bleeding was not significantly different (HR, 0.87; 95% CI, 0.70-1.09; P = .23), nor were risks for stroke (HR, 0.84; 95% CI, 0.70-1.02; P = .076) or cardiovascular death (HR, 1.02; 95% CI, 0.86-1.20; P = .82).
Still, the P2Y12 inhibitor group showed significant risk reductions for the following:
- GI bleeding: HR, 0.75 (95% CI, 0.57-0.97; P = .027)
- Definite stent thrombosis: HR, 0.42 (95% CI, 0.19-0.97; P = .028)
- Hemorrhagic stroke: HR, 0.43 (95% CI, 0.23-0.83; P = .012)
The current findings are “hypothesis-generating but not definitive,” Dharam Kumbhani, MD, University of Texas Southwestern, Dallas, said in an interview.
It remains unclear “whether aspirin or P2Y12 inhibitor monotherapy is better for long-term maintenance use among patients with established CAD. Aspirin has historically been the agent of choice for this indication,” said Dr. Kumbhani, who with James A. de Lemos, MD, of the same institution, wrote an editorial accompanying the PANTHER report.
“It certainly would be appropriate to consider P2Y12 monotherapy preferentially for patients with prior or currently at high risk for GI or intracranial bleeding, for instance,” Dr. Kumbhani said. For the remainder, aspirin and P2Y12 inhibitors are both “reasonable alternatives.”
In their editorial, Dr. Kumbhani and Dr. de Lemos call the PANTHER meta-analysis “a well-done study with potentially important clinical implications.” The findings “make biological sense: P2Y12 inhibitors are more potent antiplatelet agents than aspirin and have less effect on gastrointestinal mucosal integrity.”
But for now, they wrote, “both aspirin and P2Y12 inhibitors remain viable alternatives for prevention of atherothrombotic events among patients with established CAD.”
Dr. Gragnano had no disclosures; potential conflicts for the other authors are in the report. Dr. Kumbhani reports no relevant relationships; Dr. de Lemos has received honoraria for participation in data safety monitoring boards from Eli Lilly, Novo Nordisk, AstraZeneca, and Janssen.
A version of this article first appeared on Medscape.com.
such as clopidogrel or ticagrelor rather than aspirin, suggests a patient-level meta-analysis of seven randomized trials.
The more than 24,000 patients in the meta-analysis, called PANTHER, had documented stable CAD, prior myocardial infarction (MI), or recent or remote surgical or percutaneous coronary revascularization.
About half of patients in each antiplatelet monotherapy trial received clopidogrel or ticagrelor, and the other half received aspirin. Follow-ups ranged from 6 months to 3 years.
Those taking a P2Y12 inhibitor showed a 12% reduction in risk (P = .012) for the primary efficacy outcome, a composite of cardiovascular (CV) death, MI, and stroke, over a median of about 1.35 years. The difference was driven primarily by a 23% reduction in risk for MI (P < .001); mortality seemed unaffected by antiplatelet treatment assignment.
Although the P2Y12 inhibitor and aspirin groups were similar with respect to risk of major bleeding, the P2Y12 inhibitor group showed significant reductions in risk for gastrointestinal (GI) bleeding, definite stent thrombosis, and hemorrhagic stroke; rates of hemorrhagic stroke were well under 1% in both groups.
The treatment effects were consistent across patient subgroups, including whether the aspirin comparison was with clopidogrel or ticagrelor.
“Taken together, our data challenge the central role of aspirin in secondary prevention and support a paradigm shift toward P2Y12 inhibitor monotherapy as long-term antiplatelet strategy in the sizable population of patients with coronary atherosclerosis,” Felice Gragnano, MD, PhD, said in an interview. “Given [their] superior efficacy and similar overall safety, P2Y12 inhibitors may be preferred [over] aspirin for the prevention of cardiovascular events in patients with CAD.”
Dr. Gragnano, of the University of Campania Luigi Vanvitelli, Caserta, Italy, who called PANTHER “the largest and most comprehensive synthesis of individual patient data from randomized trials comparing P2Y12 inhibitor monotherapy with aspirin monotherapy,” is lead author of the study, which was published online in the Journal of the American College of Cardiology.
Current guidelines recommend aspirin for antiplatelet monotherapy for patients with established CAD, Dr. Gragnano said, but “the primacy of aspirin in secondary prevention is based on historical trials conducted in the 1970s and 1980s and may not apply to contemporary practice.”
Moreover, later trials that compared P2Y12 inhibitors with aspirin for secondary prevention produced “inconsistent results,” possibly owing to their heterogeneous populations of patients with coronary, cerebrovascular, or peripheral vascular disease, he said. Study-level meta-analyses in this area “provide inconclusive evidence” because they haven’t evaluated treatment effects exclusively in patients with established CAD.
Most of the seven trials’ 24,325 participants had a history of MI, and some had peripheral artery disease (PAD); the rates were 56.2% and 9.1%, respectively. Coronary revascularization, either percutaneous or surgical, had been performed for about 70%. Most (61%) had presented with acute coronary syndromes, and the remainder had presented with chronic CAD.
About 76% of the combined cohorts were from Europe or North America; the rest were from Asia. The mean age of the patients was 64 years, and about 22% were women.
In all, 12,175 had been assigned to P2Y12 inhibitor monotherapy (62% received clopidogrel and 38% received ticagrelor); 12,147 received aspirin at dosages ranging from 75 mg to 325 mg daily.
The hazard ratio (HR) for the primary efficacy outcome, P2Y12 inhibitors vs. aspirin, was significantly reduced, at 0.88 (95% confidence interval [CI], 0.79-0.97; P = .012); the number needed to treat (NNT) to prevent one primary event over 2 years was 121, the report states.
The corresponding HR for MI was 0.77 (95% CI, 0.66-0.90; P < .001), for an NNT benefit of 136. For net adverse clinical events, the HR was 0.89 (95% CI, 0.81-0.98; P = .020), for an NNT benefit of 121.
Risk for major bleeding was not significantly different (HR, 0.87; 95% CI, 0.70-1.09; P = .23), nor were risks for stroke (HR, 0.84; 95% CI, 0.70-1.02; P = .076) or cardiovascular death (HR, 1.02; 95% CI, 0.86-1.20; P = .82).
Still, the P2Y12 inhibitor group showed significant risk reductions for the following:
- GI bleeding: HR, 0.75 (95% CI, 0.57-0.97; P = .027)
- Definite stent thrombosis: HR, 0.42 (95% CI, 0.19-0.97; P = .028)
- Hemorrhagic stroke: HR, 0.43 (95% CI, 0.23-0.83; P = .012)
The current findings are “hypothesis-generating but not definitive,” Dharam Kumbhani, MD, University of Texas Southwestern, Dallas, said in an interview.
It remains unclear “whether aspirin or P2Y12 inhibitor monotherapy is better for long-term maintenance use among patients with established CAD. Aspirin has historically been the agent of choice for this indication,” said Dr. Kumbhani, who with James A. de Lemos, MD, of the same institution, wrote an editorial accompanying the PANTHER report.
“It certainly would be appropriate to consider P2Y12 monotherapy preferentially for patients with prior or currently at high risk for GI or intracranial bleeding, for instance,” Dr. Kumbhani said. For the remainder, aspirin and P2Y12 inhibitors are both “reasonable alternatives.”
In their editorial, Dr. Kumbhani and Dr. de Lemos call the PANTHER meta-analysis “a well-done study with potentially important clinical implications.” The findings “make biological sense: P2Y12 inhibitors are more potent antiplatelet agents than aspirin and have less effect on gastrointestinal mucosal integrity.”
But for now, they wrote, “both aspirin and P2Y12 inhibitors remain viable alternatives for prevention of atherothrombotic events among patients with established CAD.”
Dr. Gragnano had no disclosures; potential conflicts for the other authors are in the report. Dr. Kumbhani reports no relevant relationships; Dr. de Lemos has received honoraria for participation in data safety monitoring boards from Eli Lilly, Novo Nordisk, AstraZeneca, and Janssen.
A version of this article first appeared on Medscape.com.
such as clopidogrel or ticagrelor rather than aspirin, suggests a patient-level meta-analysis of seven randomized trials.
The more than 24,000 patients in the meta-analysis, called PANTHER, had documented stable CAD, prior myocardial infarction (MI), or recent or remote surgical or percutaneous coronary revascularization.
About half of patients in each antiplatelet monotherapy trial received clopidogrel or ticagrelor, and the other half received aspirin. Follow-ups ranged from 6 months to 3 years.
Those taking a P2Y12 inhibitor showed a 12% reduction in risk (P = .012) for the primary efficacy outcome, a composite of cardiovascular (CV) death, MI, and stroke, over a median of about 1.35 years. The difference was driven primarily by a 23% reduction in risk for MI (P < .001); mortality seemed unaffected by antiplatelet treatment assignment.
Although the P2Y12 inhibitor and aspirin groups were similar with respect to risk of major bleeding, the P2Y12 inhibitor group showed significant reductions in risk for gastrointestinal (GI) bleeding, definite stent thrombosis, and hemorrhagic stroke; rates of hemorrhagic stroke were well under 1% in both groups.
The treatment effects were consistent across patient subgroups, including whether the aspirin comparison was with clopidogrel or ticagrelor.
“Taken together, our data challenge the central role of aspirin in secondary prevention and support a paradigm shift toward P2Y12 inhibitor monotherapy as long-term antiplatelet strategy in the sizable population of patients with coronary atherosclerosis,” Felice Gragnano, MD, PhD, said in an interview. “Given [their] superior efficacy and similar overall safety, P2Y12 inhibitors may be preferred [over] aspirin for the prevention of cardiovascular events in patients with CAD.”
Dr. Gragnano, of the University of Campania Luigi Vanvitelli, Caserta, Italy, who called PANTHER “the largest and most comprehensive synthesis of individual patient data from randomized trials comparing P2Y12 inhibitor monotherapy with aspirin monotherapy,” is lead author of the study, which was published online in the Journal of the American College of Cardiology.
Current guidelines recommend aspirin for antiplatelet monotherapy for patients with established CAD, Dr. Gragnano said, but “the primacy of aspirin in secondary prevention is based on historical trials conducted in the 1970s and 1980s and may not apply to contemporary practice.”
Moreover, later trials that compared P2Y12 inhibitors with aspirin for secondary prevention produced “inconsistent results,” possibly owing to their heterogeneous populations of patients with coronary, cerebrovascular, or peripheral vascular disease, he said. Study-level meta-analyses in this area “provide inconclusive evidence” because they haven’t evaluated treatment effects exclusively in patients with established CAD.
Of the seven trials’ 24,325 combined participants, 56.2% had a history of myocardial infarction (MI) and 9.1% had peripheral artery disease (PAD). Coronary revascularization, either percutaneous or surgical, had been performed in about 70%. Most (61%) had presented with acute coronary syndromes; the remainder had chronic CAD.
About 76% of the combined cohorts were from Europe or North America; the rest were from Asia. The mean age of the patients was 64 years, and about 22% were women.
In all, 12,175 had been assigned to P2Y12 inhibitor monotherapy (62% received clopidogrel and 38% received ticagrelor); 12,147 received aspirin at dosages ranging from 75 mg to 325 mg daily.
The hazard ratio (HR) for the primary efficacy outcome, P2Y12 inhibitors vs. aspirin, was significantly reduced, at 0.88 (95% confidence interval [CI], 0.79-0.97; P = .012); the number needed to treat (NNT) to prevent one primary event over 2 years was 121, the report states.
The corresponding HR for MI was 0.77 (95% CI, 0.66-0.90; P < .001), for an NNT benefit of 136. For net adverse clinical events, the HR was 0.89 (95% CI, 0.81-0.98; P = .020), for an NNT benefit of 121.
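The NNT figures follow from absolute risk differences: the number needed to treat is simply the reciprocal of the absolute risk reduction over the follow-up period. A minimal Python sketch; the event rates below are hypothetical, chosen only so the result lands near the reported NNT of 121 (the paper's arm-level rates are not given here):

```python
def nnt(event_rate_control: float, event_rate_treatment: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    arr = event_rate_control - event_rate_treatment
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

# Hypothetical 2-year event rates (NOT from the paper), picked so the
# result reproduces the reported NNT of about 121:
print(round(nnt(0.058264, 0.05)))  # prints 121
```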
Risk for major bleeding was not significantly different (HR, 0.87; 95% CI, 0.70-1.09; P = .23), nor were risks for stroke (HR, 0.84; 95% CI, 0.70-1.02; P = .076) or cardiovascular death (HR, 1.02; 95% CI, 0.86-1.20; P = .82).
Still, the P2Y12 inhibitor group showed significant risk reductions for the following:
- GI bleeding: HR, 0.75 (95% CI, 0.57-0.97; P = .027)
- Definite stent thrombosis: HR, 0.42 (95% CI, 0.19-0.97; P = .028)
- Hemorrhagic stroke: HR, 0.43 (95% CI, 0.23-0.83; P = .012)
The current findings are “hypothesis-generating but not definitive,” Dharam Kumbhani, MD, University of Texas Southwestern, Dallas, said in an interview.
It remains unclear “whether aspirin or P2Y12 inhibitor monotherapy is better for long-term maintenance use among patients with established CAD. Aspirin has historically been the agent of choice for this indication,” said Dr. Kumbhani, who with James A. de Lemos, MD, of the same institution, wrote an editorial accompanying the PANTHER report.
“It certainly would be appropriate to consider P2Y12 monotherapy preferentially for patients with prior or currently at high risk for GI or intracranial bleeding, for instance,” Dr. Kumbhani said. For the remainder, aspirin and P2Y12 inhibitors are both “reasonable alternatives.”
In their editorial, Dr. Kumbhani and Dr. de Lemos call the PANTHER meta-analysis “a well-done study with potentially important clinical implications.” The findings “make biological sense: P2Y12 inhibitors are more potent antiplatelet agents than aspirin and have less effect on gastrointestinal mucosal integrity.”
But for now, they wrote, “both aspirin and P2Y12 inhibitors remain viable alternatives for prevention of atherothrombotic events among patients with established CAD.”
Dr. Gragnano had no disclosures; potential conflicts for the other authors are in the report. Dr. Kumbhani reports no relevant relationships; Dr. de Lemos has received honoraria for participation in data safety monitoring boards from Eli Lilly, Novo Nordisk, AstraZeneca, and Janssen.
A version of this article first appeared on Medscape.com.
FROM JACC
Expanded coverage of carotid stenting in CMS draft proposal
The new memo follows a national coverage analysis for CAS that was initiated in January 2023 and considers 193 public comments received in the ensuing month.
That analysis followed a request from the Multispecialty Carotid Alliance (MSCA) to make the existing guidelines less restrictive.
The decision proposal would expand coverage for CAS “to standard surgical risk patients by removing the limitation of coverage to only high surgical risk patients.” Coverage would be limited to patients for whom CAS is considered “reasonable and necessary” and who are either symptomatic with carotid stenosis of 50% or greater or asymptomatic with carotid stenosis of at least 70%.
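The stenosis thresholds above reduce to a simple decision rule. A sketch for illustration only; the function name and structure are mine, and it deliberately omits the separate “reasonable and necessary” clinical judgment the proposal also requires:

```python
def cas_coverage_eligible(symptomatic: bool, stenosis_pct: float) -> bool:
    """Proposed stenosis criteria: symptomatic with stenosis >= 50%,
    or asymptomatic with stenosis >= 70%. (Does not model the
    'reasonable and necessary' determination.)"""
    return stenosis_pct >= (50 if symptomatic else 70)

# A symptomatic patient at 55% qualifies; an asymptomatic one does not.
print(cas_coverage_eligible(True, 55), cas_coverage_eligible(False, 55))  # prints True False
```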
The proposal would require practitioners to “engage in a formal shared decision-making interaction with the beneficiary” that involves use of a “validated decision-making tool.” The conversation must include discussion of all treatment options and their risks and benefits and cover information from the clinical guidelines, as well as “incorporate the patient’s personal preferences and priorities.”
Much of the proposed coverage criteria resemble recommendations from several societies that offered comments in response to the Jan. 12 CMS statement that led to the current draft proposal. They include, along with MSCA, the American Association of Neurological Surgeons and the Congress of Neurological Surgeons, and jointly the American College of Cardiology and the American Heart Association.
Carotid stenting, commented the ACC/AHA, “was first introduced in 1994, and the field has matured in the last 3 decades.” The procedure “is a well-established treatment option.” The groups declared support for “removal of the facility and operator requirement for CAS consistent with the current state of the published literature and standard clinical practice.”
The current CMS draft proposal acknowledges the publication of five major randomized controlled trials and a number of “large, prospective registry-based studies” since 2009 that support its proposed coverage criteria.
Collectively, it states, the evidence “suffices to demonstrate that CAS and [carotid endarterectomy] are similarly effective” with respect to the clinical primary endpoints of recent trials “in patients with either standard or high surgical risk and who are symptomatic with carotid artery stenosis ≥ 50% or asymptomatic with stenosis ≥ 70%.”
A version of this article appeared on Medscape.com.
Peripartum cardiomyopathy raises risks at future pregnancy despite LV recovery
Women who develop peripartum cardiomyopathy (PPCM) face elevated risks in subsequent pregnancies even when left ventricular (LV) function has recovered in the interim, a new study suggests.
Researchers looked at the long-term outcomes in a cohort of women who had developed PPCM and became pregnant again several years later, comparing those with LV function that had “normalized” in the interim against those with persisting LV dysfunction.
In their analysis, adverse maternal outcomes 5 years after an index pregnancy were significantly worse among those in whom LV dysfunction had persisted, compared with those with recovered LV function. The risk of relapsed PPCM persisted out to 8 years. Mortality remained high in both groups through the follow-up.
The study suggests that “women with PPCM need long-term follow-up by cardiology, as mortality does not abate over time,” Kalgi Modi, MD, Louisiana State University, Shreveport, said in an interview.
Women with a history of PPCM, she said, need “multidisciplinary and shared decision-making for family planning, because normalization of left ventricular function after index pregnancy does not guarantee a favorable outcome in the subsequent pregnancies.”
Dr. Modi is senior author on the study published online in the Journal of the American College of Cardiology.
The current findings are important to women with a history of PPCM who are “contemplating future pregnancy,” Afshan Hameed, MD, a maternal-fetal medicine specialist and cardiologist at the University of California, Irvine, said in an interview. The investigators suggest that “complete recovery of cardiac function after PPCM does not guarantee a favorable outcome in future pregnancy,” agreed Dr. Hameed, who was not involved in the current study. Future pregnancies must therefore “be highly discouraged or considered with caution even in patients who have recovered their cardiac function.”
To investigate the impact of PPCM on risk at subsequent pregnancies, the researchers studied 45 patients with PPCM who had gone on to have at least one more pregnancy, the first a median of 28 months later. Their mean age was 27 and 80% were Black; they were followed a median of 8 years.
Peripartum cardiomyopathy, defined as idiopathic heart failure with LV ejection fraction (LVEF) 45% or less in the last month of pregnancy through the following 5 months, was diagnosed post partum in 93.3% and antepartum in the remaining 6.7% (mean time of diagnosis, 6 weeks post partum).
The mean LVEF fell from 45.1% at the index pregnancy to 41.2% (P = .009) at subsequent pregnancies. The “recovery group” included the 30 women with LVEF recovery to 50% or higher after the index pregnancy, and the remaining 15 with persisting LV dysfunction – defined as LVEF < 50% – made up the “nonrecovery group.”
Recovery of LVEF was associated with a reduced risk of persisting LV dysfunction, the report states, at a hazard ratio of 0.08 (95% CI, 0.01-0.64; P = .02) after adjustment for hypertension, diabetes, and history of preeclampsia. But that risk went up sharply in association with illicit drug use, similarly adjusted, with an HR of 9.08 (95% CI, 1.38-59.8; P = .02).
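Hazard ratios like these can be sanity-checked against their confidence intervals, since a 95% CI is conventionally symmetric on the log scale. A rough Python sketch, using a Wald z-statistic approximation rather than the authors' actual model; rounding of the published CI limits makes the recovered p-value approximate:

```python
import math

def p_from_hr_ci(hr: float, lo: float, hi: float) -> float:
    """Approximate two-sided p-value from a hazard ratio and its 95% CI,
    assuming a symmetric (Wald) interval on the log scale."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(HR)
    z = math.log(hr) / se
    # two-sided tail probability under the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Reported above: HR 0.08 (95% CI, 0.01-0.64; P = .02)
print(round(p_from_hr_ci(0.08, 0.01, 0.64), 2))  # prints 0.02
```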
At 5 years, the nonrecovery group had significantly higher rates of adverse maternal outcomes than the recovery group (53.3% vs. 20.0%; P = .04) – a composite endpoint that included relapse PPCM (33.3% vs. 3.3%; P = .01), HF (53.3% vs. 20.0%; P = .03), cardiogenic shock, thromboembolic events, and death. All-cause mortality, however, did not differ significantly between the two groups (13.3% vs. 3.3%; P = .25).
At a median of 8 years, all-cause mortality still did not differ significantly between the groups (20.0% vs. 20.0%; P = 1.00), and the difference in overall adverse maternal outcomes was no longer significant (53.3% vs. 33.3%; P = .20). The between-group difference in relapse PPCM, however, remained significant (53.3% vs. 23.3%; P = .04).
The study is limited by its retrospective nature, a relatively small population, and lack of racial diversity, the report notes.
Indeed, most of the study’s subjects were Black, and previous studies have demonstrated a “different phenotypic presentation and outcome in African American women with PPCM, compared with non–African American women,” an accompanying editorial states.
Therefore, applicability of its findings to other populations “needs to be examined by urgently needed national prospective registries with long-term follow-up,” writes Uri Elkayam, MD, University of Southern California, Los Angeles.
Moreover, the study questions “whether the reverse remodeling and improvement of [LVEF] in women with PPCM represent a true recovery.” Prior studies “have shown an impaired contractile reserve as well as abnormal myocardial strain and reduced exercise capacity and even mortality in women with PPCM after RLV,” Dr. Elkayam notes.
It’s therefore possible – as with other forms of dilated cardiomyopathy – that LVEF normalization “does not represent a true recovery but a new steady state with subclinical myocardial dysfunction that is prone to development of recurrent [LV dysfunction] and clinical deterioration in response to various triggers such as long-standing hypertension, obesity, diabetes, illicit drug use,” and, “more importantly,” subsequent pregnancies.
The study points to “the need for a close long-term follow-up of women with PPCM” and provides “a rationale for early initiation of guideline-directed medical therapy after the diagnosis of PPCM and possible continuation even after improvement of LVEF.”
No funding source was reported. Dr. Modi and coauthors, Dr. Elkayam, and Dr. Hameed declare no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
New definition for iron deficiency in CV disease proposed
A definition of iron deficiency based on transferrin saturation better identified impaired exercise capacity, poorer quality of life, and increased mortality risk in a large study of patients with pulmonary hypertension (PH), with implications that may extend to cardiovascular disease in general.
In the study involving more than 900 patients with PH, investigators at seven U.S. centers determined the prevalence of iron deficiency by two separate definitions and assessed its associations with functional measures and quality of life (QoL) scores.
An iron deficiency definition used conventionally in heart failure (HF) – ferritin less than 100 ng/mL, or 100-299 ng/mL with transferrin saturation (TSAT) less than 20% – failed to identify patients with reduced peak oxygen consumption (peakVO2), 6-minute walk test (6MWT) distance, and QoL scores on the 36-item Short Form Survey (SF-36).
But an alternative definition for iron deficiency, simply a TSAT less than 21%, did predict such patients with reduced peakVO2, 6MWT, and QoL. It was also associated with an increased mortality risk. The study was published in the European Heart Journal.
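The two competing definitions reduce to simple threshold checks. A sketch for illustration; the thresholds are from the article, but the function names and structure are mine:

```python
def iron_deficient_hf(ferritin_ng_ml: float, tsat_pct: float) -> bool:
    """Conventional heart-failure definition: ferritin < 100 ng/mL,
    or ferritin 100-299 ng/mL with TSAT < 20%."""
    return ferritin_ng_ml < 100 or (100 <= ferritin_ng_ml <= 299 and tsat_pct < 20)

def iron_deficient_tsat(tsat_pct: float) -> bool:
    """Alternative definition examined in the study: TSAT < 21%."""
    return tsat_pct < 21

# A patient with high ferritin but low TSAT is missed by the HF
# definition yet captured by the TSAT-based one.
print(iron_deficient_hf(350, 18), iron_deficient_tsat(18))  # prints False True
```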
“A low TSAT, less than 21%, is key in the pathophysiology of iron deficiency in pulmonary hypertension” and is associated with those important clinical and functional characteristics, lead author Pieter Martens, MD, PhD, said in an interview. The study “underscores the importance of these criteria in future intervention studies in the field of pulmonary hypertension testing iron therapies.”
A broader implication is that “we should revise how we define iron deficiency in heart failure and cardiovascular disease in general and how we select patients for iron therapies,” said Dr. Martens, of the Heart, Vascular & Thoracic Institute of the Cleveland Clinic.
Iron’s role in pulmonary vascular disease
“Iron deficiency is associated with an energetic deficit, especially in high energy–demanding tissue, leading to early skeletal muscle acidification and diminished left and right ventricular (RV) contractile reserve during exercise,” the published report states. It can lead to “maladaptive RV remodeling,” which is a “hallmark feature” predictive of morbidity and mortality in patients with pulmonary vascular disease (PVD).
Some studies have suggested that iron deficiency is a common comorbidity in patients with PVD, their estimates of its prevalence ranging widely due in part to the “absence of a uniform definition,” write the authors.
Dr. Martens said the current study was conducted partly in response to the increasingly common observation that the HF-associated definition of iron deficiency “has limitations.” Yet, “without validation in the field of pulmonary hypertension, the 2022 pulmonary hypertension guidelines endorse this definition.”
As iron deficiency is a causal risk factor for HF progression, Dr. Martens added, the HF field has “taught us the importance of using validated definitions for iron deficiency when selecting patients for iron treatment in randomized controlled trials.”
Moreover, some evidence suggests that iron deficiency by some definitions may be associated with diminished exercise capacity and QoL in patients with PVD, which are associations that have not been confirmed in large studies, the report notes.
Therefore, it continues, the study sought to “determine and validate” the optimal definition of iron deficiency in patients with PVD; document its prevalence; and explore associations between iron deficiency and exercise capacity, QoL, and cardiac and pulmonary vascular remodeling.
Evaluating definitions of iron deficiency
The prospective study, called PVDOMICS, entered 1,195 subjects with available iron levels. After exclusion of 38 patients with sarcoidosis, myeloproliferative disease, or hemoglobinopathy, there remained 693 patients with “overt” PH, 225 with a milder form of PH who served as PVD comparators, and 90 age-, sex-, and race/ethnicity-matched “healthy” adults who served as controls.
According to the conventional HF definition of iron deficiency – ferritin less than 100 ng/mL, or 100-299 ng/mL with TSAT less than 20% – the prevalences were 74% in patients with overt PH and 72% in those “across the PVD spectrum.”
But by that definition, iron-deficient and non–iron-deficient patients didn’t differ significantly in peakVO2, 6MWT distance, or SF-36 physical component scores.
In contrast, patients meeting the alternative definition of iron deficiency of TSAT less than 21% showed significantly reduced functional and QoL measures, compared with those with TSAT greater than or equal to 21%.
The group with TSAT less than 21% also showed significantly more RV remodeling at cardiac MRI, compared with those who had TSAT greater than or equal to 21%, but their invasively measured pulmonary vascular resistance was comparable.
Of note, those with TSAT less than 21% also showed significantly increased all-cause mortality (hazard ratio, 1.63; 95% confidence interval, 1.13-2.34; P = .009) after adjustment for age, sex, hemoglobin, and natriuretic peptide levels.
“Proper validation of the definition of iron deficiency is important for prognostication,” the published report states, “but also for providing a working definition that can be used to identify suitable patients for inclusion in randomized controlled trials” of drugs for iron deficiency.
Additionally, the finding that TSAT less than 21% points to patients with diminished functional and exercise capacity is “consistent with more recent studies in the field of heart failure” that suggest “functional abnormalities and adverse cardiac remodeling are worse in patients with a low TSAT.” Indeed, the report states, such treatment effects have been “the most convincing” in HF trials.
Broader implications
An accompanying editorial agrees that the study’s implications apply well beyond PH. It highlights that iron deficiency is common in PH, while such PH is “not substantially different from the problem in patients with heart failure, chronic kidney disease, and cardiovascular disease in general,” lead editorialist John G.F. Cleland, MD, PhD, University of Glasgow, said in an interview. “It’s also common as people get older, even in those without these diseases.”
Dr. Cleland said the anemia definition currently used in cardiovascular research and practice is based on a hemoglobin concentration below the 5th percentile of age and sex in primarily young, healthy people, and not on its association with clinical outcomes.
“We recently analyzed data on a large population in the United Kingdom with a broad range of cardiovascular diseases and found that unless anemia is severe, [other] markers of iron deficiency are usually not measured,” he said. A low hemoglobin and TSAT, but not low ferritin levels, are associated with worse prognosis.
Dr. Cleland agreed that the HF-oriented definition is “poor,” with profound implications for the conduct of clinical trials. “If the definition of iron deficiency lacks specificity, then clinical trials will include many patients without iron deficiency who are unlikely to benefit from and might be harmed by IV iron.” Inclusion of such patients may also “dilute” any benefit that might emerge and render the outcome inaccurate.
But if the definition of iron deficiency lacks sensitivity, “then in clinical practice, many patients with iron deficiency may be denied a simple and effective treatment.”
Measuring serum iron could potentially be useful, but it’s usually not done in randomized trials “especially since taking an iron tablet can give a temporary ‘blip’ in serum iron,” Dr. Cleland said. “So TSAT is a reasonable compromise.” He said he “looks forward” to any further data on serum iron as a way of assessing iron deficiency and anemia.
Half full vs. half empty
Dr. Cleland likened the question of whom to treat with iron supplementation as a “glass half full versus half empty” clinical dilemma. “One approach is to give iron to everyone unless there’s evidence that they’re overloaded,” he said, “while the other is to withhold iron from everyone unless there’s evidence that they’re iron depleted.”
Recent evidence from the IRONMAN trial suggested that its patients with HF who received intravenous iron were less likely to be hospitalized for infections, particularly COVID-19, than a usual-care group. The treatment may also help reduce frailty.
“So should we be offering IV iron specifically to people considered iron deficient, or should we be ensuring that everyone over age 70 get iron supplements?” Dr. Cleland mused rhetorically. On a cautionary note, he added, perhaps iron supplementation will be harmful if it’s not necessary.
Dr. Cleland proposed “focusing for the moment on people who are iron deficient but investigating the possibility that we are being overly restrictive and should be giving iron to a much broader population.” That course, however, would require large population-based studies.
“We need more experience,” Dr. Cleland said, “to make sure that the benefits outweigh any risks before we can just give iron to everyone.”
Dr. Martens has received consultancy fees from AstraZeneca, Abbott, Bayer, Boehringer Ingelheim, Daiichi Sankyo, Novartis, Novo Nordisk, and Vifor Pharma. Dr. Cleland declares grant support, support for travel, and personal honoraria from Pharmacosmos and Vifor. Disclosures for other authors are in the published report and editorial.
A version of this article first appeared on Medscape.com.
Iron deficiency in pulmonary hypertension (PH) may be better defined by a transferrin saturation below 21% than by criteria borrowed from heart failure, a new analysis suggests, with implications that may extend to cardiovascular disease in general.
In the study involving more than 900 patients with PH, investigators at seven U.S. centers determined the prevalence of iron deficiency by two separate definitions and assessed its associations with functional measures and quality of life (QoL) scores.
An iron deficiency definition used conventionally in heart failure (HF) – ferritin less than 100 ng/mL, or 100-299 ng/mL with transferrin saturation (TSAT) less than 20% – failed to discriminate patients with reduced peak oxygen consumption (peakVO2), 6-minute walk test (6MWT) results, and QoL scores on the 36-item Short Form Survey (SF-36).
But an alternative definition for iron deficiency, simply a TSAT less than 21%, did predict such patients with reduced peakVO2, 6MWT, and QoL. It was also associated with an increased mortality risk. The study was published in the European Heart Journal.
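Both candidate definitions reduce to simple threshold rules on routine labs. A minimal sketch in Python (illustrative only; the thresholds come from the text, while the function names and example values are ours):

```python
def iron_deficient_hf(ferritin_ng_ml: float, tsat_pct: float) -> bool:
    """Conventional heart-failure definition: ferritin < 100 ng/mL,
    or ferritin 100-299 ng/mL combined with TSAT < 20%."""
    return ferritin_ng_ml < 100 or (100 <= ferritin_ng_ml < 300 and tsat_pct < 20)

def iron_deficient_tsat(tsat_pct: float) -> bool:
    """Alternative definition validated in the study: TSAT < 21%."""
    return tsat_pct < 21

# A hypothetical patient with ferritin 350 ng/mL and TSAT 15%:
# the HF definition calls them replete, the TSAT rule flags deficiency.
print(iron_deficient_hf(350, 15))   # False
print(iron_deficient_tsat(15))      # True
```

The divergence on patients like this hypothetical one is why the two definitions can classify such different groups as iron deficient.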
“A low TSAT, less than 21%, is key in the pathophysiology of iron deficiency in pulmonary hypertension” and is associated with those important clinical and functional characteristics, lead author Pieter Martens, MD, PhD, said in an interview. The study “underscores the importance of these criteria in future intervention studies in the field of pulmonary hypertension testing iron therapies.”
A broader implication is that “we should revise how we define iron deficiency in heart failure and cardiovascular disease in general and how we select patients for iron therapies,” said Dr. Martens, of the Heart, Vascular & Thoracic Institute of the Cleveland Clinic.
Iron’s role in pulmonary vascular disease
“Iron deficiency is associated with an energetic deficit, especially in high energy–demanding tissue, leading to early skeletal muscle acidification and diminished left and right ventricular (RV) contractile reserve during exercise,” the published report states. It can lead to “maladaptive RV remodeling,” which is a “hallmark feature” predictive of morbidity and mortality in patients with pulmonary vascular disease (PVD).
Some studies have suggested that iron deficiency is a common comorbidity in patients with PVD, their estimates of its prevalence ranging widely due in part to the “absence of a uniform definition,” write the authors.
Dr. Martens said the current study was conducted partly in response to the increasingly common observation that the HF-associated definition of iron deficiency “has limitations.” Yet, “without validation in the field of pulmonary hypertension, the 2022 pulmonary hypertension guidelines endorse this definition.”
As iron deficiency is a causal risk factor for HF progression, Dr. Martens added, the HF field has “taught us the importance of using validated definitions for iron deficiency when selecting patients for iron treatment in randomized controlled trials.”
Moreover, some evidence suggests that iron deficiency by some definitions may be associated with diminished exercise capacity and QoL in patients with PVD, associations that have not been confirmed in large studies, the report notes.
Therefore, it continues, the study sought to “determine and validate” the optimal definition of iron deficiency in patients with PVD; document its prevalence; and explore associations between iron deficiency and exercise capacity, QoL, and cardiac and pulmonary vascular remodeling.
Evaluating definitions of iron deficiency
The prospective study, called PVDOMICS, enrolled 1,195 subjects with available iron levels. After exclusion of 38 patients with sarcoidosis, myeloproliferative disease, or hemoglobinopathy, there remained 693 patients with “overt” PH, 225 with a milder form of PH who served as PVD comparators, and 90 age-, sex-, and race/ethnicity-matched “healthy” adults who served as controls.
According to the conventional HF definition of iron deficiency – that is, ferritin less than 100 ng/mL, or 100-299 ng/mL with TSAT less than 20% – the prevalences were 74% in patients with overt PH and 72% in those “across the PVD spectrum.”
But by that definition, iron deficient and non-iron deficient patients didn’t differ significantly in peakVO2, 6MWT distance, or SF-36 physical component scores.
In contrast, patients meeting the alternative definition of iron deficiency of TSAT less than 21% showed significantly reduced functional and QoL measures, compared with those with TSAT greater than or equal to 21%.
The group with TSAT less than 21% also showed significantly more RV remodeling at cardiac MRI, compared with those who had TSAT greater than or equal to 21%, but their invasively measured pulmonary vascular resistance was comparable.
Of note, those with TSAT less than 21% also showed significantly increased all-cause mortality (hazard ratio, 1.63; 95% confidence interval, 1.13-2.34; P = .009) after adjustment for age, sex, hemoglobin, and natriuretic peptide levels.
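As a quick internal-consistency check, the reported hazard ratio, confidence interval, and P value hang together under the usual assumption of a Wald test on the log scale. A back-of-the-envelope sketch, not the authors’ analysis:

```python
import math

hr, lo, hi = 1.63, 1.13, 2.34  # reported hazard ratio and 95% CI

# On the log scale a 95% CI spans +/- 1.96 standard errors, so the SE of
# log(HR) can be recovered from the interval width.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Wald z statistic and two-sided P value via the normal survival function.
z = math.log(hr) / se
p = math.erfc(abs(z) / math.sqrt(2))

print(round(p, 3))  # ~0.009, matching the reported P value
```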
“Proper validation of the definition of iron deficiency is important for prognostication,” the published report states, “but also for providing a working definition that can be used to identify suitable patients for inclusion in randomized controlled trials” of drugs for iron deficiency.
Additionally, the finding that TSAT less than 21% points to patients with diminished functional and exercise capacity is “consistent with more recent studies in the field of heart failure” that suggest “functional abnormalities and adverse cardiac remodeling are worse in patients with a low TSAT.” Indeed, the report states, such treatment effects have been “the most convincing” in HF trials.
Broader implications
An accompanying editorial agrees that the study’s implications apply well beyond PH. Iron deficiency is common in PH, and the problem is “not substantially different from the problem in patients with heart failure, chronic kidney disease, and cardiovascular disease in general,” lead editorialist John G.F. Cleland, MD, PhD, University of Glasgow, said in an interview. “It’s also common as people get older, even in those without these diseases.”
Dr. Cleland said the anemia definition currently used in cardiovascular research and practice is based on a hemoglobin concentration below the 5th percentile for age and sex, derived primarily from young, healthy people, and not on its association with clinical outcomes.
“We recently analyzed data on a large population in the United Kingdom with a broad range of cardiovascular diseases and found that unless anemia is severe, [other] markers of iron deficiency are usually not measured,” he said. A low hemoglobin and TSAT, but not low ferritin levels, are associated with worse prognosis.
Dr. Cleland agreed that the HF-oriented definition is “poor,” with profound implications for the conduct of clinical trials. “If the definition of iron deficiency lacks specificity, then clinical trials will include many patients without iron deficiency who are unlikely to benefit from and might be harmed by IV iron.” Inclusion of such patients may also “dilute” any benefit that might emerge and render the outcome inaccurate.
But if the definition of iron deficiency lacks sensitivity, “then in clinical practice, many patients with iron deficiency may be denied a simple and effective treatment.”
Measuring serum iron could be useful, but it’s usually not done in randomized trials, “especially since taking an iron tablet can give a temporary ‘blip’ in serum iron,” Dr. Cleland said. “So TSAT is a reasonable compromise.” He said he “looks forward” to further data on serum iron as a way of assessing iron deficiency and anemia.
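TSAT itself is a derived quantity: by the standard laboratory definition, serum iron divided by total iron-binding capacity (TIBC). A minimal sketch with hypothetical lab values:

```python
def tsat_pct(serum_iron_ug_dl: float, tibc_ug_dl: float) -> float:
    """Transferrin saturation as a percentage: serum iron / TIBC * 100."""
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

# Hypothetical labs: serum iron 60 ug/dL, TIBC 400 ug/dL -> TSAT 15%,
# below the 21% threshold highlighted by the study.
print(tsat_pct(60, 400))  # 15.0
```

This dependence on serum iron is why a transient post-tablet “blip” in serum iron can temporarily inflate TSAT.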
Half full vs. half empty
Dr. Cleland likened the question of whom to treat with iron supplementation to a “glass half full versus half empty” clinical dilemma. “One approach is to give iron to everyone unless there’s evidence that they’re overloaded,” he said, “while the other is to withhold iron from everyone unless there’s evidence that they’re iron depleted.”
Recent evidence from the IRONMAN trial suggested that its patients with HF who received intravenous iron were less likely to be hospitalized for infections, particularly COVID-19, than a usual-care group. The treatment may also help reduce frailty.
“So should we be offering IV iron specifically to people considered iron deficient, or should we be ensuring that everyone over age 70 gets iron supplements?” Dr. Cleland mused. On a cautionary note, he added, iron supplementation may be harmful if it’s not necessary.
Dr. Cleland proposed “focusing for the moment on people who are iron deficient but investigating the possibility that we are being overly restrictive and should be giving iron to a much broader population.” That course, however, would require large population-based studies.
“We need more experience,” Dr. Cleland said, “to make sure that the benefits outweigh any risks before we can just give iron to everyone.”
Dr. Martens has received consultancy fees from AstraZeneca, Abbott, Bayer, Boehringer Ingelheim, Daiichi Sankyo, Novartis, Novo Nordisk, and Vifor Pharma. Dr. Cleland declares grant support, support for travel, and personal honoraria from Pharmacosmos and Vifor. Disclosures for other authors are in the published report and editorial.
A version of this article first appeared on Medscape.com.
FROM EUROPEAN HEART JOURNAL
Lean muscle mass protective against Alzheimer’s?
Investigators analyzed data on more than 450,000 participants in the UK Biobank as well as two independent samples of more than 320,000 individuals with and without AD, and more than 260,000 individuals participating in a separate genes and intelligence study.
They estimated lean muscle and fat tissue in the arms and legs and found, in adjusted analyses, over 500 genetic variants associated with lean mass.
On average, higher genetically proxied lean mass was associated with a “modest but statistically robust” reduction in AD risk and with superior performance on cognitive tasks.
“Using human genetic data, we found evidence for a protective effect of lean mass on risk of Alzheimer’s disease,” study investigator Iyas Daghlas, MD, a resident in the department of neurology, University of California, San Francisco, said in an interview.
Although “clinical intervention studies are needed to confirm this effect, this study supports current recommendations to maintain a healthy lifestyle to prevent dementia,” he said.
The study was published online in BMJ Medicine.
Naturally randomized research
Several measures of body composition have been investigated for their potential association with AD. Lean mass – a “proxy for muscle mass, defined as the difference between total mass and fat mass” – has been shown to be reduced in patients with AD compared with controls, the researchers noted.
“Previous research studies have tested the relationship of body mass index with Alzheimer’s disease and did not find evidence for a causal effect,” Dr. Daghlas said. “We wondered whether BMI was an insufficiently granular measure and hypothesized that disaggregating body mass into lean mass and fat mass could reveal novel associations with disease.”
Most studies have used case-control designs, which might be biased by “residual confounding or reverse causality.” Naturally randomized data “may be used as an alternative to conventional observational studies to investigate causal relations between risk factors and diseases,” the researchers wrote.
In particular, the Mendelian randomization (MR) paradigm exploits the random allocation of germline genetic variants at conception, using them as proxies for a specific risk factor.
MR “is a technique that permits researchers to investigate cause-and-effect relationships using human genetic data,” Dr. Daghlas explained. “In effect, we’re studying the results of a naturally randomized experiment whereby some individuals are genetically allocated to carry more lean mass.”
The current study used MR to investigate the effect of genetically proxied lean mass on the risk of AD and the “related phenotype” of cognitive performance.
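In its simplest form, MR computes a Wald ratio for each variant (the variant-outcome effect divided by the variant-exposure effect) and pools the ratios by inverse-variance weighting. A toy sketch with invented numbers, not the study’s data:

```python
# Toy Mendelian randomization: inverse-variance-weighted (IVW) estimate.
# All effect sizes below are invented for illustration.
beta_exposure = [0.10, 0.08, 0.12]        # variant effects on lean mass (SD units)
beta_outcome  = [-0.012, -0.010, -0.015]  # variant effects on AD log-odds
se_outcome    = [0.004, 0.005, 0.004]     # standard errors of outcome effects

# Each variant's Wald ratio estimates the causal effect of the exposure;
# weights are the inverse variances of the ratio estimates.
ratios  = [bo / be for bo, be in zip(beta_outcome, beta_exposure)]
weights = [(be / so) ** 2 for be, so in zip(beta_exposure, se_outcome)]

ivw = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
print(round(ivw, 3))  # pooled causal estimate per SD of exposure
```

A negative pooled estimate, as in this toy example, would correspond to a protective effect of the exposure on the outcome.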
Genetic proxy
As genetic proxies for lean mass, the researchers chose single nucleotide polymorphisms (genetic variants) that were associated, in a genome-wide association study (GWAS), with appendicular lean mass.
Appendicular lean mass “more accurately reflects the effects of lean mass than whole body lean mass, which includes smooth and cardiac muscle,” the authors explained.
This GWAS used phenotypic and genetic data from 450,243 participants in the UK Biobank cohort (mean age 57 years). All participants were of European ancestry.
The researchers adjusted for age, sex, and genetic ancestry. They measured appendicular lean mass using bioimpedance, which estimates body composition from how readily an electric current flows through different tissues.
In addition to the UK Biobank participants, the researchers drew on an independent sample of 21,982 people with AD; a control group of 41,944 people without AD; a replication sample of 7,329 people with and 252,879 people without AD to validate the findings; and 269,867 people taking part in a genome-wide study of cognitive performance.
The researchers identified 584 variants that met criteria for use as genetic proxies for lean mass. None were located within the APOE gene region. In the aggregate, these variants explained 10.3% of the variance in appendicular lean mass.
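Instrument strength in MR is often summarized by an approximate F statistic computed from the variance explained. Plugging in the reported figures (the formula is a standard approximation, applied here as our own check rather than the authors’ calculation):

```python
n, k, r2 = 450_243, 584, 0.103  # sample size, instruments, variance explained

# Approximate F statistic for instrument strength:
# F = ((n - k - 1) / k) * (r2 / (1 - r2)).
# Values well above the conventional threshold of 10 suggest that
# weak-instrument bias is unlikely to be a major concern.
f_stat = ((n - k - 1) / k) * (r2 / (1 - r2))
print(round(f_stat, 1))
```

With these inputs the statistic lands far above 10, consistent with a well-powered instrument set.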
Each standard deviation increase in genetically proxied lean mass was associated with a 12% reduction in AD risk (odds ratio [OR], 0.88; 95% confidence interval [CI], 0.82-0.95; P < .001). This finding was replicated in the independent consortium (OR, 0.91; 95% CI, 0.83-0.99; P = .02).
The findings remained “consistent” in sensitivity analyses.
A modifiable risk factor?
Higher appendicular lean mass was associated with better cognitive performance, with each SD increase in lean mass associated with a 0.09-SD increase in cognitive performance (95% CI, 0.06-0.11; P = .001).
“Adjusting for potential mediation through performance did not reduce the association between appendicular lean mass and risk of AD,” the authors wrote.
They obtained similar results using genetically proxied trunk and whole-body lean mass, after adjusting for fat mass.
The authors noted several limitations. The bioimpedance measures “only predict, but do not directly measure, lean mass.” Moreover, the approach didn’t examine whether a “critical window of risk factor timing” exists, during which lean mass might play a role in influencing AD risk and after which “interventions would no longer be effective.” Nor could the study determine whether increasing lean mass could reverse AD pathology in patients with preclinical disease or mild cognitive impairment.
Nevertheless, the findings suggest “that lean mass might be a possible modifiable protective factor for Alzheimer’s disease,” the authors wrote. “The mechanisms underlying this finding, as well as the clinical and public health implications, warrant further investigation.”
Novel strategies
In a comment, Iva Miljkovic, MD, PhD, associate professor, department of epidemiology, University of Pittsburgh, said the investigators used “very rigorous methodology.”
The finding suggesting that lean mass is associated with better cognitive function is “important, as cognitive impairment can become stable rather than progress to a pathological state; and, in some cases, can even be reversed.”
In those cases, “identifying the underlying cause – e.g., low lean mass – can significantly improve cognitive function,” said Dr. Miljkovic, senior author of a study showing muscle fat as a risk factor for cognitive decline.
More research will enable us to “expand our understanding of the mechanisms involved and determine whether interventions aimed at preventing muscle loss and/or reducing muscle fat may have a beneficial effect on cognitive function,” she said. “This might lead to novel strategies to prevent AD.”
Dr. Daghlas is supported by the British Heart Foundation Centre of Research Excellence at Imperial College, London, and is employed part-time by Novo Nordisk. Dr. Miljkovic reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Investigators analyzed data on more than 450,000 participants in the UK Biobank as well as two independent samples of more than 320,000 individuals with and without AD, and more than 260,000 individuals participating in a separate genes and intelligence study.
They estimated lean muscle and fat tissue in the arms and legs and found, in adjusted analyses, over 500 genetic variants associated with lean mass.
On average, higher genetically lean mass was associated with a “modest but statistically robust” reduction in AD risk and with superior performance on cognitive tasks.
“Using human genetic data, we found evidence for a protective effect of lean mass on risk of Alzheimer’s disease,” study investigators Iyas Daghlas, MD, a resident in the department of neurology, University of California, San Francisco, said in an interview.
Although “clinical intervention studies are needed to confirm this effect, this study supports current recommendations to maintain a healthy lifestyle to prevent dementia,” he said.
The study was published online in BMJ Medicine.
Naturally randomized research
Several measures of body composition have been investigated for their potential association with AD. Lean mass – a “proxy for muscle mass, defined as the difference between total mass and fat mass” – has been shown to be reduced in patients with AD compared with controls, the researchers noted.
“Previous research studies have tested the relationship of body mass index with Alzheimer’s disease and did not find evidence for a causal effect,” Dr. Daghlas said. “We wondered whether BMI was an insufficiently granular measure and hypothesized that disaggregating body mass into lean mass and fat mass could reveal novel associations with disease.”
Most studies have used case-control designs, which might be biased by “residual confounding or reverse causality.” Naturally randomized data “may be used as an alternative to conventional observational studies to investigate causal relations between risk factors and diseases,” the researchers wrote.
In particular, the Mendelian randomization (MR) paradigm randomly allocates germline genetic variants and uses them as proxies for a specific risk factor.
MR “is a technique that permits researchers to investigate cause-and-effect relationships using human genetic data,” Dr. Daghlas explained. “In effect, we’re studying the results of a naturally randomized experiment whereby some individuals are genetically allocated to carry more lean mass.”
The current study used MR to investigate the effect of genetically proxied lean mass on the risk of AD and the “related phenotype” of cognitive performance.
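For readers unfamiliar with the method, the logic of an inverse-variance-weighted MR estimate can be sketched in a few lines of code. Everything below uses made-up per-variant numbers, not data from this study (which drew on 584 real variants from the UK Biobank GWAS); it only illustrates how variant-level associations are combined into one causal estimate.

```python
# Minimal inverse-variance-weighted (IVW) Mendelian randomization sketch
# on synthetic data -- illustrative only, not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_variants = 584          # matches the study's instrument count
true_effect = -0.13       # hypothetical log-odds of AD per SD of lean mass

# Hypothetical variant-level associations:
#   beta_x: variant -> lean mass (SD units)
#   beta_y: variant -> AD log-odds, observed with standard error se_y
beta_x = rng.uniform(0.02, 0.10, n_variants)
se_y = rng.uniform(0.005, 0.02, n_variants)
beta_y = true_effect * beta_x + rng.normal(0.0, se_y)

# IVW estimate: weighted regression of beta_y on beta_x through the
# origin, weighting each variant by 1 / se_y**2.
weights = 1.0 / se_y**2
ivw = np.sum(weights * beta_x * beta_y) / np.sum(weights * beta_x**2)

odds_ratio_per_sd = np.exp(ivw)  # on the same scale as the study's OR
print(f"IVW log-odds per SD: {ivw:.3f} (OR {odds_ratio_per_sd:.2f})")
```

With hundreds of variants, the pooled estimate recovers the underlying effect even though each individual variant is a very weak, noisy instrument; that aggregation is what gives MR its statistical power.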
Genetic proxy
As genetic proxies for lean mass, the researchers chose single nucleotide polymorphisms (genetic variants) that were associated, in a genome-wide association study (GWAS), with appendicular lean mass.
Appendicular lean mass “more accurately reflects the effects of lean mass than whole body lean mass, which includes smooth and cardiac muscle,” the authors explained.
This GWAS used phenotypic and genetic data from 450,243 participants in the UK Biobank cohort (mean age 57 years). All participants were of European ancestry.
The researchers adjusted for age, sex, and genetic ancestry. They measured appendicular lean mass using bioimpedance, which estimates body composition from how readily a weak electric current flows through the body’s tissues.
In addition to the UK Biobank participants, the researchers drew on an independent sample of 21,982 people with AD; a control group of 41,944 people without AD; a replication sample of 7,329 people with and 252,879 people without AD to validate the findings; and 269,867 people taking part in a genome-wide study of cognitive performance.
The researchers identified 584 variants that met criteria for use as genetic proxies for lean mass. None were located within the APOE gene region. In the aggregate, these variants explained 10.3% of the variance in appendicular lean mass.
Each standard deviation increase in genetically proxied lean mass was associated with a 12% reduction in AD risk (odds ratio [OR], 0.88; 95% confidence interval [CI], 0.82-0.95; P < .001). This finding was replicated in the independent consortium (OR, 0.91; 95% CI, 0.83-0.99; P = .02).
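As a quick arithmetic check (mine, not the paper's), the “12% reduction” is simply the complement of the reported odds ratio:

```python
# Percent reduction in odds implied by an odds ratio below 1 --
# a sanity check on the figures quoted in the text, not study code.
def percent_reduction(odds_ratio: float) -> float:
    return (1.0 - odds_ratio) * 100.0

primary = percent_reduction(0.88)      # main UK Biobank-based analysis
replication = percent_reduction(0.91)  # independent replication sample
print(primary, replication)
```

The same transformation gives a 9% reduction for the replication estimate, consistent with the slightly attenuated OR of 0.91.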
The findings remained “consistent” in sensitivity analyses.
A modifiable risk factor?
Higher appendicular lean mass was also associated with better cognitive performance: each SD increase in lean mass was associated with a 0.09-SD increase in cognitive performance (95% CI, 0.06-0.11; P = .001).
“Adjusting for potential mediation through cognitive performance did not reduce the association between appendicular lean mass and risk of AD,” the authors wrote.
They obtained similar results using genetically proxied trunk and whole-body lean mass, after adjusting for fat mass.
The authors noted several limitations. The bioimpedance measures “only predict, but do not directly measure, lean mass.” Moreover, the approach didn’t examine whether a “critical window of risk factor timing” exists, during which lean mass might play a role in influencing AD risk and after which “interventions would no longer be effective.” Nor could the study determine whether increasing lean mass could reverse AD pathology in patients with preclinical disease or mild cognitive impairment.
Nevertheless, the findings suggest “that lean mass might be a possible modifiable protective factor for Alzheimer’s disease,” the authors wrote. “The mechanisms underlying this finding, as well as the clinical and public health implications, warrant further investigation.”
Novel strategies
In a comment, Iva Miljkovic, MD, PhD, associate professor, department of epidemiology, University of Pittsburgh, said the investigators used “very rigorous methodology.”
The finding suggesting that lean mass is associated with better cognitive function is “important, as cognitive impairment can become stable rather than progress to a pathological state; and, in some cases, can even be reversed.”
In those cases, “identifying the underlying cause – e.g., low lean mass – can significantly improve cognitive function,” said Dr. Miljkovic, senior author of a study showing muscle fat as a risk factor for cognitive decline.
More research will enable us to “expand our understanding” of the mechanisms involved and determine whether interventions aimed at preventing muscle loss and/or reducing muscle fat may have a beneficial effect on cognitive function, she said. “This might lead to novel strategies to prevent AD.”
Dr. Daghlas is supported by the British Heart Foundation Centre of Research Excellence at Imperial College, London, and is employed part-time by Novo Nordisk. Dr. Miljkovic reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM BMJ MEDICINE
Do oral contraceptives increase depression risk?
In addition, OC use in adolescence has been tied to an increased risk for depression later in life. However, some experts believe the study’s methodology may be flawed.
The investigators tracked more than 250,000 women from birth to menopause, gathering information about their use of combined contraceptive pills (a progestin and estrogen), the timing of the initial depression diagnosis, and the onset of depressive symptoms that were not formally diagnosed.
Women who began using these OCs before or at the age of 20 experienced a 130% higher incidence of depressive symptoms, whereas adult users saw a 92% increase. But the higher occurrence of depression tended to decline after the first 2 years of use, except in teenagers, who maintained an increased incidence of depression even after discontinuation.
This effect remained, even after analysis of potential familial confounding.
“Our findings suggest that the use of OCs, particularly during the first 2 years, increases the risk of depression. Additionally, OC use during adolescence might increase the risk of depression later in life,” Therese Johansson, of the department of immunology, genetics, and pathology, Science for Life Laboratory, Uppsala (Sweden) University, and colleagues wrote.
The study was published online in Epidemiology and Psychiatric Sciences.
Inconsistent findings
Previous studies suggest an association between adolescent use of hormonal contraceptives (HCs) and increased depression risk, but it’s “less clear” whether these effects are similar in adults, the authors wrote. Randomized clinical trials have “shown little or no effect” of HCs on mood. However, most of these studies didn’t consider previous use of HC.
The researchers wanted to estimate the incidence rate of depression associated with first initiation of OC use as well as the lifetime risk associated with use.
They studied 264,557 female participants in the UK Biobank (aged 37-71 years), collecting data from questionnaires, interviews, physical health measures, biological samples, imaging, and linked health records.
Most participants taking OCs had initiated use during the 1970s/early 1980s when second-generation OCs were predominantly used, consisting of levonorgestrel and ethinyl estradiol.
The researchers conducted a secondary outcome analysis on women who completed the UK Biobank Mental Health Questionnaire (MHQ) to evaluate depressive symptoms.
They estimated the associated risk for depression within 2 years after starting OCs in all women, as well as in groups stratified by age at initiation: before age 20 (adolescents) and age 20 and older (adults). In addition, the investigators estimated the lifetime risk for depression.
Time-dependent analysis compared the effect of OC use at initiation to the effect during the remaining years of use in recent and previous users.
They analyzed a subcohort of female siblings, utilizing “inference about causation from examination of familial confounding,” defined by the authors as a “regression-based approach for determining causality through the use of paired observational data collected from related individuals.”
Adolescents at highest risk
Of the participants, 80.6% had used OCs at some point.
The first 2 years of use were associated with a higher rate of depression among users, compared with never-users (hazard ratio, 1.79; 95% confidence interval, 1.63-1.96). Although the risk became less pronounced after that, ever-use was still associated with increased lifetime risk for depression (HR, 1.05; 95% CI, 1.01-1.09).
Adolescents and adult OC users both experienced higher rates of depression during the first 2 years, with a more marked effect in adolescents than in adults (HR, 1.95; 95% CI, 1.64-2.32; and HR, 1.74; 95% CI, 1.54-1.95, respectively).
Previous users of OCs had a higher lifetime risk for depression, compared with never-users (HR, 1.05; 95% CI, 1.01-1.09).
Of the subcohort of women who completed the MHQ (n = 82,232), about half reported experiencing at least one of the core depressive symptoms.
OC initiation was associated with an increased risk for depressive symptoms during the first 2 years in ever- versus never-users (HR, 2.00; 95% CI, 1.91-2.10).
Those who began using OCs during adolescence had a dramatically higher rate of depressive symptoms, compared with never-users (HR, 2.30; 95% CI, 2.11-2.51), as did adult initiators (HR, 1.92).
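The percentage increases quoted earlier in the article follow directly from these hazard ratios; the conversion is just (HR − 1) × 100, shown here as a sanity check rather than anything from the study itself:

```python
# Percent increase in event rate implied by a hazard ratio above 1 --
# reproduces the percentages quoted in the text from the reported HRs.
def percent_increase(hazard_ratio: float) -> float:
    return (hazard_ratio - 1.0) * 100.0

adolescent = percent_increase(2.30)  # depressive symptoms, adolescent initiators
adult = percent_increase(1.92)       # depressive symptoms, adult initiators
print(adolescent, adult)
```

This is why the HR of 2.30 for adolescent initiators is reported as a 130% higher incidence, and the HR of 1.92 for adult initiators as a 92% increase.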
In the analysis of 7,354 first-degree sister pairs, 81% had initiated OCs. A sibling’s OC use was positively associated with a depression diagnosis, and the cosibling’s OC use was also associated with the sibling’s depression diagnosis. “These results support the hypothesis of a causal relationship between OC use and depression, such that OC use increases the risk of depression,” the authors wrote.
The main limitation is the potential for recall bias in the self-reported data, and that the UK Biobank sample consists of a healthier population than the overall U.K. population, which “hampers the generalizability” of the findings, the authors stated.
Flawed study
In a comment, Natalie Rasgon, MD, founder and director of the Stanford (Calif.) Center for Neuroscience in Women’s Health, said the study was “well researched” and “well written” but had “methodological issues.”
She questioned the sibling component, “which the researchers regard as confirming causality.” The effect may be “important but not causative.” Causality in people who are recalling retrospectively “is highly questionable by any adept researcher because it’s subject to memory. Different siblings may have different recall.”
The authors also didn’t study the indication for OC use. Several medical conditions are treated with OCs, including premenstrual dysphoric disorder, the “number one mood disorder among women of reproductive age.” Including this “could have made a huge difference in outcome data,” said Dr. Rasgon, who was not involved with the study.
Anne-Marie Amies Oelschlager, MD, professor of obstetrics and gynecology, University of Washington, Seattle, noted participants were asked to recall depressive symptoms and OC use as far back as 20-30 years ago, which lends itself to inaccurate recall.
And the researchers didn’t ascertain whether the contraceptives had been used continuously or had been started, stopped, and restarted. Nor did they look at different formulations and doses. And the observational nature of the study “limits the ability to infer causation,” continued Dr. Oelschlager, chair of the American College of Obstetricians and Gynecologists Clinical Consensus Gynecology Committee. She was not involved with the study.
“This study is too flawed to use meaningfully in clinical practice,” Dr. Oelschlager concluded.
The study was primarily funded by the Swedish Research Council, the Swedish Brain Foundation, and the Uppsala University Center for Women’s Mental Health during the Reproductive Lifespan. The authors, Dr. Rasgon, and Dr. Oelschlager declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
In addition, OC use in adolescence has been tied to an increased risk for depression later in life. However, some experts believe the study’s methodology may be flawed.
The investigators tracked more than 250,000 women from birth to menopause, gathering information about their use of combined contraceptive pills (progesterone and estrogen), the timing of the initial depression diagnosis, and the onset of depressive symptoms that were not formally diagnosed.
Women who began using these OCs before or at the age of 20 experienced a 130% higher incidence of depressive symptoms, whereas adult users saw a 92% increase. But the higher occurrence of depression tended to decline after the first 2 years of use, except in teenagers, who maintained an increased incidence of depression even after discontinuation.
This effect remained, even after analysis of potential familial confounding.
“Our findings suggest that the use of OCs, particularly during the first 2 years, increases the risk of depression. Additionally, OC use during adolescence might increase the risk of depression later in life,” Therese Johansson, of the department of immunology, genetics, and pathology, Science for Life Laboratory, Uppsala (Sweden) University, and colleagues wrote.
The study was published online in Epidemiology and Psychiatric Sciences.
Inconsistent findings
Previous studies suggest an association between adolescent use of hormonal contraceptives (HCs) and increased depression risk, but it’s “less clear” whether these effects are similar in adults, the authors wrote. Randomized clinical trials have “shown little or no effect” of HCs on mood. However, most of these studies didn’t consider previous use of HC.
The researchers wanted to estimate the incidence rate of depression associated with first initiation of OC use as well as the lifetime risk associated with use.
They studied 264,557 female participants in the UK Biobank (aged 37-71 years), collecting data from questionnaires, interviews, physical health measures, biological samples, imaging, and linked health records.
Most participants taking OCs had initiated use during the 1970s/early 1980s when second-generation OCs were predominantly used, consisting of levonorgestrel and ethinyl estradiol.
The researchers conducted a secondary outcome analysis on women who completed the UK Biobank Mental Health Questionnaire (MHQ) to evaluate depressive symptoms.
They estimated the associated risk for depression within 2 years after starting OCs in all women, as well as in groups stratified by age at initiation: before age 20 (adolescents) and age 20 and older (adults). In addition, the investigators estimated the lifetime risk for depression.
Time-dependent analysis compared the effect of OC use at initiation to the effect during the remaining years of use in recent and previous users.
They analyzed a subcohort of female siblings, utilizing “inference about causation from examination of familial confounding,” defined by the authors as a “regression-based approach for determining causality through the use of paired observational data collected from related individuals.”
Adolescents at highest risk
Of the participants, 80.6% had used OCs at some point.
The first 2 years of use were associated with a higher rate of depression among users, compared with never-users (hazard ration, 1.79; 95% confidence interval, 1.63-1.96). Although the risk became less pronounced after that, ever-use was still associated with increased lifetime risk for depression (HR, 1.05; 95% CI, 1.01-1.09).
Adolescents and adult OC users both experienced higher rates of depression during the first 2 years, with a more marked effect in adolescents than in adults (HR, 1.95; 95% CI, 1.64-2.32; and HR, 1.74; 95% CI, 1.54-1.95, respectively).
Previous users of OCs had a higher lifetime risk for depression, compared with never-users (HR, 1.05; 95% CI, 1.01-1.09).
Of the subcohort of women who completed the MHQ (n = 82,232), about half reported experiencing at least one of the core depressive symptoms.
OC initiation was associated with an increased risk for depressive symptoms during the first 2 years in ever- versus never-users (HR, 2.00; 95% CI, 1.91-2.10).
Those who began using OCs during adolescence had a dramatically higher rate of depressive symptoms, compared with never-users (HR, 2.30; 95% CI, 2.11-2.51), as did adult initiators (HR, 1.92; 95% CI, 2.11-2.51).
In the analysis of 7,354 first-degree sister pairs, 81% had initiated OCs. A sibling’s OC use was positively associated with a depression diagnosis, and the cosibling’s OC use was also associated with the sibling’s depression diagnosis. “These results support the hypothesis of a causal relationship between OC use and depression, such that OC use increases the risk of depression,” the authors wrote.
The main limitation is the potential for recall bias in the self-reported data, and that the UK Biobank sample consists of a healthier population than the overall U.K. population, which “hampers the generalizability” of the findings, the authors stated.
Flawed study
In a comment, Natalie Rasgon, MD, founder and director of the Stanford (Calif.) Center for Neuroscience in Women’s Health, said the study was “well researched” and “well written” but had “methodological issues.”
She questioned the sibling component, “which the researchers regard as confirming causality.” The effect may be “important but not causative.” Causality in people who are recalling retrospectively “is highly questionable by any adept researcher because it’s subject to memory. Different siblings may have different recall.”
The authors also didn’t study the indication for OC use. Several medical conditions are treated with OCs, including premenstrual dysphoric disorder, the “number one mood disorder among women of reproductive age.” Including this “could have made a huge difference in outcome data,” said Dr. Rasgon, who was not involved with the study.
Anne-Marie Amies Oelschlager, MD, professor of obstetrics and gynecology, University of Washington, Seattle, noted participants were asked to recall depressive symptoms and OC use as far back as 20-30 years ago, which lends itself to inaccurate recall.
And the researchers didn’t ascertain whether the contraceptives had been used continuously or had been started, stopped, and restarted. Nor did they look at different formulations and doses. And the observational nature of the study “limits the ability to infer causation,” continued Dr. Oelschlager, chair of the American College of Obstetrics and Gynecology Clinical Consensus Gynecology Committee. She was not involved with the study.
“This study is too flawed to use meaningfully in clinical practice,” Dr. Oelschlager concluded.
The study was primarily funded by the Swedish Research Council, the Swedish Brain Foundation, and the Uppsala University Center for Women's Mental Health during the Reproductive Lifespan. The authors, Dr. Rasgon, and Dr. Oelschlager declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM EPIDEMIOLOGY AND PSYCHIATRIC SCIENCES
Can a repurposed Parkinson’s drug slow ALS progression?
Investigators randomly assigned 20 individuals with sporadic ALS to receive either ropinirole or placebo for 24 weeks. During the double-blind period, there was no difference between the groups in terms of decline in functional status.
However, during a further open-label extension period, the ropinirole group showed significant suppression of functional decline and an average of an additional 7 months of progression-free survival.
The researchers were able to predict clinical responsiveness to ropinirole in vitro by analyzing motor neurons derived from participants’ stem cells.
“We found that ropinirole is safe and tolerable for ALS patients and shows therapeutic promise at helping them sustain daily activity and muscle strength,” first author Satoru Morimoto, MD, of the department of physiology, Keio University School of Medicine, Tokyo, said in a news release.
The study was published online in Cell Stem Cell.
Feasibility study
“ALS is totally incurable and it’s a very difficult disease to treat,” senior author Hideyuki Okano, MD, PhD, professor, department of physiology, Keio University, said in the news release.
Preclinical animal models have “limited translational potential” for identifying drug candidates, but induced pluripotent stem cell (iPSC)–derived motor neurons (MNs) from ALS patients can “overcome these limitations for drug screening,” the authors write.
“We previously identified ropinirole [a dopamine D2 receptor agonist] as a potential anti-ALS drug in vitro by iPSC drug discovery,” Dr. Okano said.
The current trial was a randomized, placebo-controlled phase 1/2a feasibility trial that evaluated the safety, tolerability, and efficacy of ropinirole in patients with ALS, using several parameters:
- The revised ALS functional rating scale (ALSFRS-R) score.
- Composite functional endpoints.
- Event-free survival.
- Time to ≤ 50% forced vital capacity (FVC).
The trial consisted of a 12-week run-in period, a 24-week double-blind period, an open-label extension period that lasted from 4 to 24 weeks, and a 4-week follow-up period after administration.
Thirteen patients were assigned to receive ropinirole (23.1% women; mean age, 65.2 ± 12.6 years; 7.7% with clinically definite and 76.9% with clinically probable ALS); seven were assigned to receive placebo (57.1% women; mean age, 66.3 ± 7.5 years; 14.3% with clinically definite and 85.7% with clinically probable ALS).
Of the treatment group, 30.8% had a bulbar onset lesion vs. 57.1% in the placebo group. At baseline, the mean FVC was 94.4% ± 14.9 and 81.5% ± 23.2 in the ropinirole and placebo groups, respectively. The mean body mass index (BMI) was 22.91 ± 3.82 and 19.69 ± 2.63, respectively.
Of the participants, 12 in the ropinirole group and six in the control group completed the full 24-week treatment protocol; 12 in the ropinirole group and five in the placebo group completed the open-label extension (participants who had received placebo were switched to the active drug).
However, only seven participants in the ropinirole group and one participant in the placebo group completed the full 1-year trial.
‘Striking correlation’
“During the double-blind period, muscle strength and daily activity were maintained, but a decline in the ALSFRS-R … was not different from that in the placebo group,” the researchers write.
In the open-label extension period, the ropinirole group showed "significant suppression of ALSFRS-R decline," with an ALSFRS-R score decline of only 7.75 points (95% confidence interval, 4.63-10.66) for the treatment group vs. 17.51 points (95% CI, 12.56-22.46) for the placebo group.
The researchers used the combined assessment of function and survival (CAFS) score, which adjusts the ALSFRS-R score against mortality, to see whether functional benefits translated into improved survival.
The score “favored ropinirole” in the open-extension period and the entire treatment period but not in the double-blind period.
Disease progression events occurred in 7 of 7 (100%) participants in the placebo group and 7 of 13 (54%) in the ropinirole group, “suggesting a twofold decrease in disease progression” in the treatment group.
The ropinirole group experienced an additional 27.9 weeks of disease progression–free survival, compared with the placebo group.
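The arithmetic behind that "twofold decrease" can be sketched from the raw counts reported above. This is a simplified illustration using only the event proportions, not the study's time-to-event analysis:

```python
# Simplified illustration of the progression comparison, using only the
# raw counts reported in the article (the study itself used
# time-to-event methods, not a crude risk ratio).

placebo_events, placebo_n = 7, 7          # 100% progressed
ropinirole_events, ropinirole_n = 7, 13   # ~54% progressed

placebo_risk = placebo_events / placebo_n
ropinirole_risk = ropinirole_events / ropinirole_n

# The crude risk ratio is roughly one-half, i.e. about a twofold
# decrease in the proportion of participants with a progression event.
risk_ratio = ropinirole_risk / placebo_risk

print(f"placebo: {placebo_risk:.0%}, ropinirole: {ropinirole_risk:.0%}, "
      f"risk ratio: {risk_ratio:.2f}")
```

The ratio of roughly 0.54 is what underlies the "twofold decrease" wording, with the caveat that small groups make such a ratio imprecise.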
“No participant discontinued treatment because of adverse experiences in either treatment group,” the authors report.
The analysis of iPSC-derived motor neurons from participants showed dopamine D2 receptor expression, as well as the potential involvement of the cholesterol pathway SREBP2 in the therapeutic effects of ropinirole. Lipid peroxide was also identified as a good “surrogate clinical marker to assess disease progression and drug efficacy.”
“We found a very striking correlation between a patient’s clinical response and the response of their motor neurons in vitro,” said Dr. Morimoto. “Patients whose motor neurons responded robustly to ropinirole in vitro had a much slower clinical disease progression with ropinirole treatment, while suboptimal responders showed much more rapid disease progression, despite taking ropinirole.”
Limitations include “small sample sizes and high attrition rates in the open-label extension period,” so “further validation” is required, the authors state.
Significant flaws
Commenting for this article, Carmel Armon, MD, MHS, professor of neurology, Loma Linda (Calif.) University, said the study “falls short of being a credible 1/2a clinical trial.”
Although the “intentions were good and the design not unusual,” the two groups were not “balanced on risk factors for faster progressing disease.” Rather, the placebo group was “tilted towards faster progressing disease” because there were more clinically definite and probable ALS patients in the placebo group than the treatment group, and there were more patients with bulbar onset.
Participants in the placebo group also had shorter median disease duration, lower BMI, and lower FVC, noted Dr. Armon, who was not involved with the study.
And only 1 of 7 control patients completed the open-label extension, compared with 7 of 13 patients in the intervention group.
“With these limitations, I would be disinclined to rely on the findings to justify a larger clinical trial,” Dr. Armon concluded.
The trial was sponsored by K Pharma. The study drug, active drugs, and placebo were supplied free of charge by GlaxoSmithKline K.K. Dr. Okano received grants from JSPS and AMED and grants and personal fees from K Pharma during the conduct of the study and personal fees from Sanbio, outside the submitted work. Dr. Okano has a patent on a therapeutic agent for ALS and composition for treatment licensed to K Pharma. The other authors’ disclosures and additional information are available in the original article. Dr. Armon reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CELL STEM CELL
Concomitant med use may explain poor antidepressant response
Investigators studied over 800 patients who were taking antidepressants for major depressive disorder (MDD) and found that close to two-thirds were taking at least one nonpsychiatric medication with potential depressive symptom side effects (PDSS), more than 30% were taking two or more such medications, and 20% at least three such medications.
These medications, which included antihypertensive medications and corticosteroids, among others, were associated with higher odds of moderate-to-severe depressive symptoms, compared with medications without PDSS.
“When evaluating the reasons for inadequate response to treatment for depression, clinicians should consider whether their patient is also receiving a nonpsychiatric medication with a potential for depressive symptom side effects,” study investigator Mark Olfson, MD, MPH, Elizabeth K. Dollard professor of psychiatry, medicine, and law and professor of epidemiology, Columbia University Irving Medical Center, New York, said in an interview.
The study was published online in the Journal of Clinical Psychiatry.
Previous research limited
“In earlier research, we found that people who were taking medications with a potential to cause depressive symptom side effects were at increased risk of depression, especially those adults who were taking more than one of these medications,” said Dr. Olfson.
This finding led Dr. Olfson and his team to “wonder whether the risks of depressive symptoms associated with these medications extended to people who were being actively treated with antidepressants for depression.”
To investigate, they turned to the National Health and Nutrition Examination Survey (NHANES) – a nationally representative cross-sectional survey of the United States general population.
The study was based on the 2013-2014, 2015-2016, and 2017-2018 waves and included 885 adults who reported using antidepressant medications for greater than or equal to 6 weeks for depression and whose depression could be ascertained.
Prescription medications with PDSS were identified through Micromedex, whose accuracy is “established” and primarily based on the U.S. Food and Drug Administration’s labeled side effects.
Nonantidepressant psychiatric medications and medications for Alzheimer’s disease or substance use disorders were not included in the analysis.
Antidepressant-treated MDD was defined as taking an antidepressant for MDD for greater than or equal to 6 weeks. Depressive symptoms were ascertained using the Patient Health Questionnaire-9 (PHQ-9) with a score of less than 5 representing no/minimal depressive symptoms and a score of greater than or equal to 10 indicating moderate/severe symptoms.
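The PHQ-9 cutoffs used in the analysis can be expressed as a small helper. This is a hypothetical function written for illustration; the band between the two reported thresholds (scores of 5-9, conventionally "mild") was not a focus of the study's results:

```python
def phq9_category(score: int) -> str:
    """Classify a PHQ-9 total score (0-27) using the study's cutoffs:
    < 5 = no/minimal symptoms, >= 10 = moderate/severe symptoms."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total scores range from 0 to 27")
    if score < 5:
        return "no/minimal"
    if score >= 10:
        return "moderate/severe"
    return "mild"  # 5-9: between the two cutoffs reported in the study
```

So, for example, `phq9_category(3)` falls in the no/minimal range and `phq9_category(12)` in the moderate/severe range used for the study's outcome definitions.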
Other variables included self-reported sex, age, race/ethnicity, income, education, health insurance, and common chronic medical conditions such as hypertension, arthritis, lung disease, diabetes mellitus, thyroid disease, cancer, heart disease, liver disease, stroke, and congestive heart failure.
Recovery interrupted
Of the patients in the study treated with antidepressants, most were female, greater than or equal to 50 years of age, non-Hispanic White, and with a college education (70.5%, 62.0%, 81.7%, and 69.4%, respectively).
Selective serotonin reuptake inhibitors were used by 67.9% of participants with MDD. Most had been on the same antidepressant medication for a “long time,” the authors report, with 79.2% and 67.8% taking them for greater than 1 year and greater than 2 years, respectively.
Despite the large number of patients on antidepressants, only 43.0% scored in the no/minimal symptoms range, based on the PHQ-9, while 28.4% scored in the moderate/severe range.
Most patients (85%) took at least one medication for medical conditions, most of them medications with PDSS: 66.7% took at least one medication with PDSS, 37.3% took at least two, 21.6% took at least three, 10.7% took at least four, and 4.9% took at least five.
Almost 75% were using greater than or equal to 1 medication without PDSS, and about 50% were using greater than 1.
The number of medications with PDSS was significantly associated with lower odds of no/minimal depressive symptoms (AOR, 0.75 [95% CI, 0.64-0.87]; P < .001) and higher odds of moderate/severe symptoms (AOR, 1.14 [1.004-1.29]; P = .044).
“The predicted probability of no/minimal symptoms in those taking 5 medications with PDSS was less than half the predicted probability in those taking no medications with PDSS (0.23 vs. 0.52),” the authors report.
Conversely, the predicted probability of moderate/severe symptoms was ~50% higher in individuals taking 5 versus 0 medications with PDSS (0.36 vs. 0.24).
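On the odds scale, a per-medication adjusted odds ratio compounds multiplicatively, which is roughly how five medications with PDSS can halve the probability of no/minimal symptoms. Below is a back-of-envelope, unadjusted sketch; the study's covariate-adjusted model reported predicted probabilities of 0.52 and 0.23, which differ somewhat from this simplified version:

```python
# Back-of-envelope check: apply the per-medication AOR of 0.75 for
# no/minimal symptoms five times on the odds scale, starting from the
# article's reported probability of 0.52 at zero medications with PDSS.
# (Unadjusted illustration only; the study's covariate-adjusted model
# reported 0.23 at five medications.)

aor_per_med = 0.75
p0 = 0.52

odds0 = p0 / (1 - p0)               # convert probability to odds
odds5 = odds0 * aor_per_med ** 5    # ORs multiply on the odds scale
p5 = odds5 / (1 + odds5)            # convert back to a probability

print(f"predicted probability with 5 medications: {p5:.2f}")
```

This simplified calculation lands near 0.20, in the same neighborhood as the adjusted 0.23 reported by the authors, illustrating why each additional medication with PDSS meaningfully erodes the chance of minimal symptoms.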
No corresponding associations were found for medications without PDSS.
The results were even stronger when the researchers repeated their adjusted regression analyses to focus on the 10 individual medications most associated with the severity of depressive symptoms. These were omeprazole, gabapentin, meloxicam, tramadol, ranitidine, baclofen, oxycodone, tizanidine, propranolol, and morphine, with an AOR of 0.42 [0.30-0.60] for no/minimal symptoms and 1.68 [1.24-2.27] for moderate/severe symptoms.
“Many widely prescribed medications, from antihypertensives, such as atenolol and metoprolol to corticosteroids, such as dexamethasone and triamcinolone, are associated with depression side effects,” said Dr. Olfson.
“These medications could interfere with recovery from depression. When available, consideration should be given to selecting a substitute with lower risk for depressive symptoms,” he said.
Role in treatment-resistant depression
In a comment, Dima Qato, PharmD, MPH, PhD, Hygeia Centennial chair and associate professor, University of Southern California School of Pharmacy, Los Angeles, said the study “is an important reminder that the use of medications with depressive symptoms side effects is increasingly common and may contribute to delays in responsiveness or worsen depressive symptoms among individuals being treated for depression.”
Dr. Qato, who is also the director of the Program on Medicines and Public Health, USC School of Pharmacy, and was not involved with the study, recommended that clinicians “consider the role of medications with depression side effects when evaluating patients with treatment-resistant depression.”
The study was not supported by any funding agency. Dr. Olfson and coauthors have disclosed no relevant financial relationships. Dr. Qato is a consultant for the Public Citizen Health Research Group.
A version of this article first appeared on Medscape.com.
Dr. Qato, who is also the director of the Program on Medicines and Public Health, USC School of Pharmacy, and was not involved with the study, recommended that clinicians “consider the role of medications with depression side effects when evaluating patients with treatment-resistant depression.”
The study was not supported by any funding agency. Dr. Olfson and coauthors have disclosed no relevant financial relationships. Dr. Qato is a consultant for the Public Citizen Health Research Group.
A version of this article first appeared on Medscape.com.
Investigators studied over 800 patients who were taking antidepressants for major depressive disorder (MDD) and found that close to two-thirds were taking at least one nonpsychiatric medication with potential depressive symptom side effects (PDSS), more than 30% were taking two or more such medications, and 20% at least three such medications.
These medications, which included antihypertensive medications and corticosteroids, among others, were associated with higher odds of moderate-to-severe depressive symptoms, compared with medications without PDSS.
“When evaluating the reasons for inadequate response to treatment for depression, clinicians should consider whether their patient is also receiving a nonpsychiatric medication with a potential for depressive symptom side effects,” study investigator Mark Olfson, MD, MPH, Elizabeth K. Dollard professor of psychiatry, medicine, and law and professor of epidemiology, Columbia University Irving Medical Center, New York, said in an interview.
The study was published online in the Journal of Clinical Psychiatry.
Previous research limited
“In earlier research, we found that people who were taking medications with a potential to cause depressive symptom side effects were at increased risk of depression, especially those adults who were taking more than one of these medications,” said Dr. Olfson.
This finding led Dr. Olfson and his team to “wonder whether the risks of depressive symptoms associated with these medications extended to people who were being actively treated with antidepressants for depression.”
To investigate, they turned to the National Health and Nutrition Examination Survey (NHANES) – a nationally representative cross-sectional survey of the United States general population.
The study was based on the 2013-2014, 2015-2016, and 2017-2018 waves and included 885 adults who reported using antidepressant medications for at least 6 weeks for depression and whose depressive symptoms could be ascertained.
Prescription medications with PDSS were identified through Micromedex, whose accuracy is “established” and primarily based on the U.S. Food and Drug Administration’s labeled side effects.
Nonantidepressant psychiatric medications and medications for Alzheimer’s disease or substance use disorders were not included in the analysis.
Antidepressant-treated MDD was defined as taking an antidepressant for MDD for at least 6 weeks. Depressive symptoms were ascertained with the Patient Health Questionnaire-9 (PHQ-9), with a score below 5 representing no/minimal depressive symptoms and a score of 10 or higher indicating moderate/severe symptoms.
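The PHQ-9 cutoffs used in the study can be expressed as a small helper. This is an illustrative sketch, not code from the paper; the function name and the label for the intermediate 5-9 band (which the study does not analyze) are our own:

```python
def phq9_category(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to the symptom bands used in the study.

    Scores of 5-9 fall between the study's two bands of interest; the
    "mild" label here is an assumption for completeness, not from the paper.
    """
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total scores range from 0 to 27")
    if score < 5:
        return "no/minimal"
    if score >= 10:
        return "moderate/severe"
    return "mild"
```

For example, `phq9_category(3)` returns `"no/minimal"` and `phq9_category(12)` returns `"moderate/severe"`.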
Other variables included self-reported sex, age, race/ethnicity, income, education, health insurance, and common chronic medical conditions such as hypertension, arthritis, lung disease, diabetes mellitus, thyroid disease, cancer, heart disease, liver disease, stroke, and congestive heart failure.
Recovery interrupted
Of the patients in the study treated with antidepressants, most were female (70.5%), aged 50 years or older (62.0%), non-Hispanic White (81.7%), and college educated (69.4%).
Selective serotonin reuptake inhibitors were used by 67.9% of participants with MDD. Most had been on the same antidepressant medication for a “long time,” the authors report, with 79.2% taking them for more than 1 year and 67.8% for more than 2 years.
Despite ongoing antidepressant treatment, only 43.0% scored in the no/minimal symptoms range on the PHQ-9, while 28.4% scored in the moderate/severe range.
Most patients (85%) took at least one medication for medical conditions, the majority of them medications with PDSS: 66.7% took at least one medication with PDSS, 37.3% at least two, 21.6% at least three, 10.7% at least four, and 4.9% at least five.
Almost 75% were using at least one medication without PDSS, and about 50% were using more than one.
The number of medications with PDSS was significantly associated with lower odds of no/minimal depressive symptoms (AOR, 0.75 [95% CI, 0.64-0.87]; P < .001) and higher odds of moderate/severe symptoms (AOR, 1.14 [1.004-1.29]; P = .044).
“The predicted probability of no/minimal symptoms in those taking 5 medications with PDSS was less than half the predicted probability in those taking no medications with PDSS (0.23 vs. 0.52),” the authors report.
Conversely, the predicted probability of moderate/severe symptoms was ~50% higher in individuals taking 5 versus 0 medications with PDSS (0.36 vs. 0.24).
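As a rough consistency check (ours, not the authors'), the predicted probabilities above can be approximated by applying the per-medication adjusted odds ratio five times to the zero-medication baseline on the odds scale. Because the actual model adjusts for covariates, the match is only approximate:

```python
def shift_probability(p0: float, odds_ratio: float, n_meds: int) -> float:
    """Apply a per-medication odds ratio n_meds times to a baseline probability p0."""
    odds = p0 / (1 - p0) * odds_ratio ** n_meds
    return odds / (1 + odds)

# No/minimal symptoms: baseline 0.52, AOR 0.75 per medication with PDSS
p_minimal = shift_probability(0.52, 0.75, 5)  # ~0.20, close to the reported 0.23

# Moderate/severe symptoms: baseline 0.24, AOR 1.14 per medication with PDSS
p_severe = shift_probability(0.24, 1.14, 5)   # ~0.38, close to the reported 0.36
```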
No corresponding associations were found for medications without PDSS.
The results were even stronger when the researchers repeated their adjusted regression analyses to focus on the 10 individual medications most associated with the severity of depressive symptoms. These were omeprazole, gabapentin, meloxicam, tramadol, ranitidine, baclofen, oxycodone, tizanidine, propranolol, and morphine, with an AOR of 0.42 [0.30-0.60] for no/minimal symptoms and 1.68 [1.24-2.27] for moderate/severe symptoms.
“Many widely prescribed medications, from antihypertensives, such as atenolol and metoprolol to corticosteroids, such as dexamethasone and triamcinolone, are associated with depression side effects,” said Dr. Olfson.
“These medications could interfere with recovery from depression. When available, consideration should be given to selecting a substitute with lower risk for depressive symptoms,” he said.
Role in treatment-resistant depression
In a comment, Dima Qato, PharmD, MPH, PhD, Hygeia Centennial chair and associate professor, University of Southern California School of Pharmacy, Los Angeles, said the study “is an important reminder that the use of medications with depressive symptoms side effects is increasingly common and may contribute to delays in responsiveness or worsen depressive symptoms among individuals being treated for depression.”
Dr. Qato, who is also the director of the Program on Medicines and Public Health, USC School of Pharmacy, and was not involved with the study, recommended that clinicians “consider the role of medications with depression side effects when evaluating patients with treatment-resistant depression.”
The study was not supported by any funding agency. Dr. Olfson and coauthors have disclosed no relevant financial relationships. Dr. Qato is a consultant for the Public Citizen Health Research Group.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF CLINICAL PSYCHIATRY
B vitamin may help boost antidepressant efficacy
The investigators analyzed six studies and found support for adjunctive use of L-methylfolate (LMF) in patients with MDD not responding to antidepressant monotherapy. Treatment response was greatest in those with obesity and elevated inflammatory biomarkers.
“If clinicians try LMF on their patients with treatment-resistant depression, the treatment is very robust in patients who have high BMI [body mass index] or inflammatory biomarkers, and it’s worth a try even in patients who don’t have these indicators, since it’s safe and well tolerated, with no downside,” study investigator Vladimir Maletic, MD, MS, clinical professor of psychiatry and behavioral science, University of South Carolina, Greenville, said in an interview.
The study was published online in the Journal of Clinical Psychiatry.
‘Shortcut’ to the brain
A considerable percentage of patients with MDD fail to achieve an adequate response to treatment, the authors wrote.
Previous research shows benefits of folate (vitamin B9) and other B vitamins in the pathophysiology and treatment of depression.
Folate is available in several forms, including LMF, which differs from dietary folate and synthetic folic acid supplements because it’s a reduced metabolite that readily crosses the blood-brain barrier.
“This is a ‘shortcut’ that gets directly to the brain, especially in those with higher BMI or inflammatory indicators, allowing their antidepressant to work better,” Dr. Maletic said.
LMF is available as a prescription medical food and approved for the clinical dietary management of patients with MDD.
The authors wanted to understand the potential role of LMF in treating patients with MDD with insufficient response to current antidepressant therapy.
They analyzed six studies:
- Two multicenter, randomized, double-blind, placebo-controlled sequential parallel trials for patients with SSRI-resistant MDD (n = 148 and n = 75).
- A 12-month open-label extension trial of the two randomized, controlled trials (n = 68).
- A retrospective cohort study evaluating patients previously prescribed LMF (n = 554).
- Two post hoc exploratory analyses of the second randomized, controlled trial, stratifying patients by specific biological and genetic markers (n = 74) and evaluating the effect of biomarkers on treatment effect (n = 74).
The primary endpoints were improvement on the 17-item Hamilton Depression Rating Scale (HDRS-17) or the Patient Health Questionnaire (PHQ-9).
Patients in all trials were treated with either 7.5 mg or 15 mg of LMF.
Both RCTs were divided into two 30-day phases, with patients assessed every 10 days. Response was defined as at least a 50% reduction in HDRS-17 score during treatment or a final score of 7 or less.
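The trials' response criterion (at least a 50% reduction on the HDRS-17, or a final score of 7 or less) can be sketched as a simple predicate; this is a hypothetical helper, not trial code:

```python
def hdrs17_response(baseline: int, final: int) -> bool:
    """Return True if a patient meets the trials' response criterion:
    a >= 50% reduction from baseline on the HDRS-17, or a final score <= 7."""
    if baseline <= 0:
        raise ValueError("baseline HDRS-17 score must be positive")
    reduction = (baseline - final) / baseline
    return reduction >= 0.5 or final <= 7
```

A patient going from 24 to 11, for example, meets the criterion (a 54% reduction), while a patient going from 20 to 12 does not (a 40% reduction with a final score above 7).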
‘Salvage pathway’
In the RCTs, patients who received 7.5 mg of LMF did not achieve efficacy superior to placebo, while those receiving 15 mg/day of LMF for 30 days showed significantly greater reduction in HDRS-17 scores (–5.6 vs. –3.0; P = .05) and higher response rates (32.3% vs. 14.6%; P = .05).
The 12-month open extension trial showed that among patients who received the 15-mg dose, 61% achieved remission at any point, and 38% achieved recovery. Among initial nonresponders, 60% eventually achieved remission, with no serious adverse events.
“These results indicate that patients who respond well to shorter-term treatment are likely to maintain that response over the subsequent year and shows that those not adequately responding within the first 8 weeks of therapy may benefit from longer-term LMF treatments,” the investigators noted.
In the observational cohort study, the pooled mean change in PHQ-9 was –8.5, with response and remission rates of 67.9% and 45.7%, respectively.
“These outcomes suggest that the results seen in the controlled trial are likely to extend to patients in real-world practice,” the researchers wrote.
The post hoc analyses focusing on the findings of the two RCTs explored the differences in response to LMF, based on biomarker, BMI, and genotype.
Individuals with BMI less than 30 did not have a significant change from baseline with LMF treatment, in contrast to those with BMI of 30 or higher (pooled treatment effect, –4.66; 95% CI, –7.22 to –1.98), a difference the authors call “striking.”
Levels of inflammatory markers (tumor necrosis factor–alpha, interleukin-8, heart-specific C-reactive protein, and leptin) above the median value were associated with significantly greater treatment effect – a finding that remained significant even after adjustment for BMI.
Although BMI and cytokines both showed significant main effects, the “synergy” between them “suggests that these risk factors may interact with each other to influence response to LMF,” the authors wrote.
The mechanism by which LMF augments antidepressant treatment is tied to monoamine synthesis, since LMF promotes the synthesis of key monoamine neurotransmitters associated with MDD (serotonin, norepinephrine, and dopamine), Dr. Maletic explained.
High levels of inflammation (often tied to obesity) cause oxidative stress, which inhibits the synthesis of these neurotransmitters and depletes them more rapidly. LMF provides a “salvage pathway” that may prevent this from happening, thus increasing the antidepressant response of the monoamines, he said.
A ‘good addition’
In a comment, David Mischoulon, MD, PhD, Joyce R. Tedlow Professor of Psychiatry at Harvard Medical School and director of the depression clinical and research program at Massachusetts General Hospital, both in Boston, said the paper “does a good job of synthesizing what we know about LMF as an adjunctive treatment in major depression.”
However, he recommended “caution” when interpreting the findings, since “relatively few” studies were reviewed.
Dr. Mischoulon, who was not involved with the study, said that a “particularly interesting finding from these studies is individuals who are overweight and/or have elevation in inflammatory activity ... seemed to respond better to the addition of LMF.” This finding is similar to what his research team observed when investigating the potential role of fish oils in treating depression.
“These findings overall are not surprising, in view of the well-established multidirectional relationship between depression, inflammation, and overweight status,” he said.
LMF “seems like a good addition to the pharmacological armamentarium for depression; and because it is safe and has minimal side effects, it can be added to the treatment regimen of patients who are depressed and not responding adequately to standard antidepressants,” he said.
This work was funded by Alfasigma USA. The authors did not receive payment for their participation. Dr. Maletic has received writing support from Alfasigma USA; consulting/advisory fees from AbbVie/Allergan, Acadia, Alfasigma USA, Alkermes, Eisai-Purdue, Intra-Cellular Therapies, Janssen, Lundbeck, Jazz, Noven, Otsuka America, Sage, Sunovion, Supernus, and Takeda; and honoraria for lectures from AbbVie, Acadia, Alkermes, Allergan, Eisai, Ironshore, Intra-Cellular, Janssen, Lundbeck, Otsuka America, Sunovion, Supernus, and Takeda. Dr. Mischoulon has received research support from Nordic Naturals and Heckel Medizintechnik. He has received honoraria for speaking from the Massachusetts General Hospital Psychiatry Academy, PeerPoint Medical Education Institute, and Harvard blog.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF CLINICAL PSYCHIATRY