New blood test may detect preclinical Alzheimer’s years in advance


A new blood test that identifies a variant of the protein p53 appears to predict Alzheimer’s disease (AD) progression up to 6 years before a clinical diagnosis, early research suggests.

Analysis of two studies showed the test (AlzoSure Predict), which uses less than 1 mL of blood, had numerous benefits compared with other blood tests that track AD pathology.

“We believe this has the potential to radically improve early stratification and identification of patients for trials 6 years in advance of a diagnosis, which can potentially enable more rapid and efficient approvals of therapies,” Paul Kinnon, CEO of Diadem, the test’s manufacturer, said in an interview.

The findings were presented at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference.
 

Positive “discovery” results

p53, which is present both in the brain and elsewhere in the body, “is one of the most targeted proteins” for drug development in cancer and other conditions, said Mr. Kinnon.

The current blood test measures a derivative of p53 (U-p53AZ). Previous research suggests this derivative, which affects amyloid and oxidative stress, is also implicated in AD pathogenesis.

Researchers used blood samples from participants aged 60 years and older, with various levels of cognitive function, from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study.

They analyzed samples at multiple timepoints over a 10-year period, “so we know when the marker is most accurate at predicting decline,” Mr. Kinnon said.

The first of two studies was considered a “discovery” study and included blood samples from 224 patients.

Results showed the test predicted decline from mild cognitive impairment (MCI) to AD at the end of 6 years, with an area under the curve (AUC) greater than 90%.

These results are “massive,” said Mr. Kinnon. “It’s the most accurate test I’ve seen anywhere for predicting decline of a patient.”

The test can also accurately classify a patient’s stage of cognition, he added. “Not only does it allow us to predict 6 years in advance, it also tells us if the patient has SMC [subjective memory complaints], MCI, or AD with a 95% certainty,” Mr. Kinnon said.

He noted that the test’s sensitivity was higher than that of traditional methods currently in use. The positive predictive value (PPV) and negative predictive value (NPV), both at 90% or higher, were “absolutely fantastic,” said Mr. Kinnon.
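For context on these metrics, here is a minimal sketch of how sensitivity, PPV, and NPV are computed from a confusion matrix. The counts below are invented purely for illustration; they are not data from the AlzoSure Predict studies.

```python
# Hypothetical confusion-matrix counts, invented for illustration only
# (not data from the AlzoSure Predict studies).
tp, fp = 90, 10   # test-positive: true progressors, false alarms
fn, tn = 8, 92    # test-negative: missed progressors, true negatives

sensitivity = tp / (tp + fn)  # share of eventual progressors the test flags
ppv = tp / (tp + fp)          # P(progresses to AD | test positive)
npv = tn / (tn + fn)          # P(does not progress | test negative)

print(f"sensitivity={sensitivity:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
```

Note that PPV and NPV, unlike sensitivity, depend on how common progression is in the tested population, so figures from one cohort do not automatically carry over to another.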
 

“Better than expected” results

In the second “validation” study, investigators examined samples from a completely different group of 482 patients. The “very compelling” results showed AUCs over 90%, PPVs over 90%, and “very high” NPVs, Mr. Kinnon said.

“These are great data, better than we expected,” he added.

However, he noted the test is “very specific” for decline to AD and not to other dementias.

In addition, Mr. Kinnon noted the test does not monitor levels of amyloid beta or tau, which accumulate at a later stage of AD. “Amyloid and tau tell you you’ve got it. We’re there way before those concentrations become detectable,” he said.

Identifying patients who will progress to AD years before they have symptoms gives them time to make medical decisions. These patients may also try treatments at an earlier stage of the disease, when these therapies are most likely to be helpful, said Mr. Kinnon.

In addition, using the test could speed up the approval of prospective drug treatments for AD. Currently, pharmaceutical companies enroll thousands of patients into a clinical study “and they don’t know which ones will have AD,” Mr. Kinnon noted.

“This test tells you these are the ones who are going to progress and should go into the study, and these are the ones that aren’t. So it makes the studies statistically relevant and accurate,” he said.

Investigators can also use the test to monitor patients during a study instead of relying on expensive PET scans and painful, costly spinal fluid taps, he added.

Previous surveys and market research have shown that neurologists and general practitioners “want a blood test to screen patients early, to help educate and inform patients,” said Mr. Kinnon.

Further results that will include biobank data on more than 1,000 patients in the United States and Europe are due for completion toward the end of this year.

The company is currently in negotiations to bring the product to North America, Europe, and elsewhere. “Our goal is to have it on the market by the middle of next year in multiple regions,” Mr. Kinnon said.
 

Encouraging, preliminary

Commenting on the findings, Percy Griffin, PhD, MSc, director of scientific engagement at the Alzheimer’s Association, said “it’s exciting” to see development of novel ways for detecting or predicting AD.

“There is an urgent need for simple, inexpensive, noninvasive, and accessible early detection tools for Alzheimer’s, such as a blood test,” he said.

However, Dr. Griffin cautioned the test is still in the early stages of development and has not been tested extensively in large, diverse clinical trials.

In addition, although the test predicts whether a person will progress, it does not predict when the person will progress, he added.

“These preliminary results are encouraging, but further validation is needed before this test can be implemented widely,” he said.

Technologies that facilitate the early detection and intervention before significant loss of brain cells from AD “would be game-changing” for individuals, families, and the healthcare system, Dr. Griffin concluded.

A version of this article first appeared on Medscape.com.

Issue
Neurology reviews - 30(1)

Article Source

FROM CTAD21

Publish date: November 24, 2021

Multivitamins, but not cocoa, tied to slowed brain aging


 

Taking a daily multivitamin for 3 years is associated with a 60% slowing of cognitive aging, with the effects especially pronounced in patients with cardiovascular disease (CVD), new research suggests.


In addition to testing the effect of a daily multivitamin on cognition, the COSMOS-Mind study examined cocoa flavanols, which showed no benefit.

The findings “may have important public health implications, particularly for brain health, given the accessibility of multivitamins and minerals, and their low cost and safety,” said study investigator Laura D. Baker, PhD, professor of gerontology and geriatric medicine, Wake Forest University, Winston-Salem, N.C.

The findings were presented at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference.

 

Placebo-controlled study

The study is a substudy of a large parent trial, COSMOS, which compared the effects of cocoa extract (500 mg/day cocoa flavanols) and a standard multivitamin-mineral (MVM) supplement with placebo on cardiovascular and cancer outcomes in more than 21,000 older participants.

COSMOS-Mind included 2,262 adults aged 65 and over without dementia who underwent cognitive testing at baseline and annually for 3 years. The mean age at baseline was 73 years, and 40.4% were men. Most participants (88.7%) were non-Hispanic White and almost half (49.2%) had some post-college education.

All study groups were balanced with respect to demographics, CVD history, diabetes, depression, smoking status, alcohol intake, chocolate intake, and prior multivitamin use. Baseline cognitive scores were similar between study groups. Researchers had complete data on 77% of study participants.

The primary endpoint was the effect of cocoa extract (CE) vs. placebo on a global cognitive function composite score. The secondary outcome was the effect of MVM vs. placebo on the same measure.

Additional outcomes included the impact of supplements on executive function and memory and the treatment effects for prespecified subgroups, including subjects with a history of CVD.

Using a graph of change over time, Dr. Baker showed there was no effect of cocoa on global cognitive function (effect: 0.03; 95% confidence interval, –0.02 to 0.08; P = .28). “We see the to-be-expected practice effects, but there’s no separation between the active and placebo groups,” she said.

It was a different story for MVM. Here, there was the same practice effect, but the graph showed the lines separated for global cognitive function composite score (effect: 0.07; 95% CI, 0.02-0.12; P = .007).
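As a quick plausibility check on those numbers (assuming the reported interval is a symmetric, normal-approximation 95% CI, which the presentation does not state), the interval width pins down the standard error, and the implied z statistic matches the reported P value to within rounding:

```python
from math import erf, sqrt

# Reported MVM result: effect 0.07, 95% CI 0.02-0.12, P = .007.
effect, lower, upper = 0.07, 0.02, 0.12

# A symmetric 95% CI of effect +/- 1.96*SE implies:
se = (upper - lower) / (2 * 1.96)                     # implied standard error
z = effect / se                                       # implied z statistic
p_two_sided = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal tail area

print(f"SE = {se:.4f}, z = {z:.2f}, two-sided P = {p_two_sided:.3f}")
```

The implied two-sided P of roughly .006 agrees with the reported .007 once rounding of the CI endpoints is taken into account.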

“We see a positive effect of multivitamins for the active group relative to placebo, peaking at 2 years and then remaining stable over time,” said Dr. Baker.

There were similar findings with MVM for the memory composite score, and the executive function composite score. “We have significance in all three, where the two lines do separate over and above the practice effects,” said Dr. Baker.
 

New evidence

A prespecified subgroup had a baseline history of CVD, including transient ischemic attack, heart failure, coronary artery bypass grafting, percutaneous transluminal coronary angioplasty, or stent placement; myocardial infarction and stroke were not included because they were exclusion criteria in the parent trial.

As expected, those with CVD had lower cognitive scores at baseline. “But after an initial bump due to practice effect, at year 1, the cardiovascular disease history folks continue to benefit from multivitamins, whereas those who got placebo multivitamins continue to decline over time,” said Dr. Baker.

Based on information from a baseline scatter plot of cognitive function scores by age, the study’s modeling estimated the multivitamin treatment effect had a positive benefit of 0.028 standard deviations (SD) per year.

“Daily multivitamin-mineral supplementation appears to slow cognitive aging by 60% or by 1.8 years,” Dr. Baker added.
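The two framings of the benefit line up if the 60% figure is applied across the trial’s 3-year follow-up; this arithmetic is an inference from the quoted numbers, not a calculation shown in the presentation.

```python
# A 60% slowing of cognitive aging sustained over a 3-year follow-up
# corresponds to 1.8 fewer "years" of cognitive aging accrued.
slowing = 0.60
followup_years = 3
years_avoided = round(slowing * followup_years, 1)
print(years_avoided)  # prints 1.8
```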

To date, the effect of MVM supplementation on cognition had been tested in only one large randomized clinical trial, the Physicians’ Health Study II. That study did not show an effect, but it enrolled only older male physicians, and cognitive testing began 2.5 years after randomization, said Dr. Baker.

“Our study provides new evidence that daily multivitamin supplementation may benefit cognitive function in older women and men, and the multivitamin effects may be more pronounced in participants with cardiovascular disease,” she noted.

For effects of multivitamins on Alzheimer’s disease prevalence and progression, “stay tuned,” Dr. Baker concluded.

Following the presentation, session cochair Suzanne Schindler, MD, PhD, instructor in the department of neurology at Washington University, St. Louis, said she and her colleagues “always check vitamin B12 levels” in patients with memory and cognitive difficulties and wondered if study subjects with a low level or deficiency of vitamin B12 benefited from the intervention.

“We are asking ourselves that as well,” said Dr. Baker.

“Some of this is a work in progress,” Dr. Baker added. “We still need to look at that more in-depth to understand whether it might be a mechanism for improvement. I think the results are still out on that topic.”

The study received support from the National Institute on Aging. Pfizer Consumer Healthcare (now GSK Consumer Healthcare) provided study pills and packaging. Dr. Baker has disclosed no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.


 

Taking a daily multivitamin for 3 years is associated with a 60% slowing of cognitive aging, with the effects especially pronounced in patients with cardiovascular (CVD) disease, new research suggests.

©Graça Victoria/iStockphoto.com

In addition to testing the effect of a daily multivitamin on cognition, the COSMOS-Mind study examined the effect of cocoa flavanols, but showed no beneficial effect.

The findings “may have important public health implications, particularly for brain health, given the accessibility of multivitamins and minerals, and their low cost and safety,” said study investigator Laura D. Baker, PhD, professor, gerontology and geriatric medicine, Wake Forest University, Winston-Salem, N.C.

The findings were presented at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference.

 

Placebo-controlled study

The study is a substudy of a large parent trial that compared the effects of cocoa extract (500 mg/day cocoa flavanols) and a standard multivitamin-mineral (MVM) to placebo on cardiovascular and cancer outcomes in more than 21,000 older participants.

COSMOS-Mind included 2,262 adults aged 65 and over without dementia who underwent cognitive testing at baseline and annually for 3 years. The mean age at baseline was 73 years, and 40.4% were men. Most participants (88.7%) were non-Hispanic White and almost half (49.2%) had some post-college education.

All study groups were balanced with respect to demographics, CVD history, diabetes, depression, smoking status, alcohol intake, chocolate intake, and prior multivitamin use. Baseline cognitive scores were similar between study groups. Researchers had complete data on 77% of study participants.

The primary endpoint was the effect of cocoa extract (CE) vs. placebo on Global Cognitive Function composite score. The secondary outcome was the effect of MVM vs. placebo on global cognitive function.

Additional outcomes included the impact of supplements on executive function and memory and the treatment effects for prespecified subgroups, including subjects with a history of CVD.

Using a graph of change over time, Dr. Baker showed there was no effect of cocoa on global cognitive function (effect: 0.03; 95% confidence interval, –0.02 to 0.08; P = .28). “We see the to-be-expected practice effects, but there’s no separation between the active and placebo groups,” she said.

It was a different story for MVM. Here, there was the same practice effect, but the graph showed the lines separated for global cognitive function composite score (effect: 0.07; 95% CI, 0.02-0.12; P = .007).

“We see a positive effect of multivitamins for the active group relative to placebo, peaking at 2 years and then remaining stable over time,” said Dr. Baker.

There were similar findings with MVM for the memory composite score, and the executive function composite score. “We have significance in all three, where the two lines do separate over and above the practice effects,” said Dr. Baker.
 

New evidence

Investigators found a baseline history of CVD, including transient ischemic attack, heart failure, coronary artery bypass graft, percutaneous transluminal coronary angioplasty, and stent, but not myocardial infarction or stroke as these were excluded in the parent trial because they affected the response to multivitamins.

As expected, those with CVD had lower cognitive scores at baseline. “But after an initial bump due to practice effect, at year 1, the cardiovascular disease history folks continue to benefit from multivitamins, whereas those who got placebo multivitamins continue to decline over time,” said Dr. Baker.

Based on information from a baseline scatter plot of cognitive function scores by age, the study’s modeling estimated the multivitamin treatment effect had a positive benefit of .028 standard deviations (SD) per year.

“Daily multivitamin-mineral supplementation appears to slow cognitive aging by 60% or by 1.8 years,” Dr. Baker added.

To date, the effect of MVM supplementation on cognition has been tested in only one large randomized clinical trial – the Physicians Health Study II. That study did not show an effect, but included only older male physicians – and cognitive testing began 2.5 years after randomization, said Dr. Baker.

“Our study provides new evidence that daily multivitamin supplementation may benefit cognitive function in older women and men, and the multivitamin effects may be more pronounced in participants with cardiovascular disease,” she noted.

For effects of multivitamins on Alzheimer’s disease prevalence and progression, “stay tuned,” Dr. Baker concluded.

Following the presentation, session cochair Suzanne Schindler, MD, PhD, instructor in the department of neurology at Washington University, St. Louis, said she and her colleagues “always check vitamin B12 levels” in patients with memory and cognitive difficulties and wondered if study subjects with a low level or deficiency of vitamin B12 benefited from the intervention.

“We are asking ourselves that as well,” said Dr. Baker.

“Some of this is a work in progress,” Dr. Baker added. “We still need to look at that more in-depth to understand whether it might be a mechanism for improvement. I think the results are still out on that topic.”

The study received support from the National Institute on Aging. Pfizer Consumer Healthcare (now GSK Consumer Healthcare) provided study pills and packaging. Dr. Baker has disclosed no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.

 

Taking a daily multivitamin for 3 years is associated with a 60% slowing of cognitive aging, with the effects especially pronounced in patients with cardiovascular (CVD) disease, new research suggests.

©Graça Victoria/iStockphoto.com

In addition to testing the effect of a daily multivitamin on cognition, the COSMOS-Mind study examined the effect of cocoa flavanols, but showed no beneficial effect.

The findings “may have important public health implications, particularly for brain health, given the accessibility of multivitamins and minerals, and their low cost and safety,” said study investigator Laura D. Baker, PhD, professor, gerontology and geriatric medicine, Wake Forest University, Winston-Salem, N.C.

The findings were presented at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference.

 

Placebo-controlled study

The study is a substudy of a large parent trial that compared the effects of cocoa extract (500 mg/day cocoa flavanols) and a standard multivitamin-mineral (MVM) to placebo on cardiovascular and cancer outcomes in more than 21,000 older participants.

COSMOS-Mind included 2,262 adults aged 65 and over without dementia who underwent cognitive testing at baseline and annually for 3 years. The mean age at baseline was 73 years, and 40.4% were men. Most participants (88.7%) were non-Hispanic White and almost half (49.2%) had some post-college education.

All study groups were balanced with respect to demographics, CVD history, diabetes, depression, smoking status, alcohol intake, chocolate intake, and prior multivitamin use. Baseline cognitive scores were similar between study groups. Researchers had complete data on 77% of study participants.

The primary endpoint was the effect of cocoa extract (CE) vs. placebo on Global Cognitive Function composite score. The secondary outcome was the effect of MVM vs. placebo on global cognitive function.

Additional outcomes included the impact of supplements on executive function and memory and the treatment effects for prespecified subgroups, including subjects with a history of CVD.

Using a graph of change over time, Dr. Baker showed there was no effect of cocoa on global cognitive function (effect: 0.03; 95% confidence interval, –0.02 to 0.08; P = .28). “We see the to-be-expected practice effects, but there’s no separation between the active and placebo groups,” she said.

It was a different story for MVM. Here, there was the same practice effect, but the graph showed the lines separated for global cognitive function composite score (effect: 0.07; 95% CI, 0.02-0.12; P = .007).

“We see a positive effect of multivitamins for the active group relative to placebo, peaking at 2 years and then remaining stable over time,” said Dr. Baker.

There were similar findings with MVM for the memory composite score, and the executive function composite score. “We have significance in all three, where the two lines do separate over and above the practice effects,” said Dr. Baker.
 

New evidence

Investigators found a baseline history of CVD, including transient ischemic attack, heart failure, coronary artery bypass graft, percutaneous transluminal coronary angioplasty, and stent, but not myocardial infarction or stroke as these were excluded in the parent trial because they affected the response to multivitamins.

As expected, those with CVD had lower cognitive scores at baseline. “But after an initial bump due to practice effect, at year 1, the cardiovascular disease history folks continue to benefit from multivitamins, whereas those who got placebo multivitamins continue to decline over time,” said Dr. Baker.

Based on information from a baseline scatter plot of cognitive function scores by age, the study’s modeling estimated the multivitamin treatment effect had a positive benefit of 0.028 standard deviations (SD) per year.

“Daily multivitamin-mineral supplementation appears to slow cognitive aging by 60% or by 1.8 years,” Dr. Baker added.
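The “60%” and “1.8 years” figures are two expressions of the same estimate over the trial’s 3-year testing window: offsetting 60% of 3 years of cognitive aging corresponds to roughly 1.8 years. A quick back-of-the-envelope check (the 3-year window is taken from the study design described above):

```python
# Back-of-the-envelope check that the reported "60% slowing" and
# "1.8 years" describe the same effect over 3 years of annual testing.
follow_up_years = 3       # COSMOS-Mind tested cognition annually for 3 years
slowing_fraction = 0.60   # reported relative slowing of cognitive aging
years_of_aging_offset = follow_up_years * slowing_fraction
print(round(years_of_aging_offset, 1))  # prints 1.8
```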

To date, the effect of MVM supplementation on cognition has been tested in only one large randomized clinical trial – the Physicians’ Health Study II. That study did not show an effect, but it included only older male physicians, and cognitive testing began 2.5 years after randomization, said Dr. Baker.

“Our study provides new evidence that daily multivitamin supplementation may benefit cognitive function in older women and men, and the multivitamin effects may be more pronounced in participants with cardiovascular disease,” she noted.

For effects of multivitamins on Alzheimer’s disease prevalence and progression, “stay tuned,” Dr. Baker concluded.

Following the presentation, session cochair Suzanne Schindler, MD, PhD, instructor in the department of neurology at Washington University, St. Louis, said she and her colleagues “always check vitamin B12 levels” in patients with memory and cognitive difficulties and wondered if study subjects with a low level or deficiency of vitamin B12 benefited from the intervention.

“We are asking ourselves that as well,” said Dr. Baker.

“Some of this is a work in progress,” Dr. Baker added. “We still need to look at that more in-depth to understand whether it might be a mechanism for improvement. I think the results are still out on that topic.”

The study received support from the National Institute on Aging. Pfizer Consumer Healthcare (now GSK Consumer Healthcare) provided study pills and packaging. Dr. Baker has disclosed no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.


Antihypertensives tied to lower Alzheimer’s disease pathology


Certain antihypertensive medications, particularly diuretics, are linked to lower Alzheimer’s disease neuropathology and other brain disease processes, new research shows.

Investigators found that use of any antihypertensive was associated with an 18% decrease in Alzheimer’s disease neuropathology, a 22% decrease in Lewy bodies, and a 40% decrease in TAR DNA-binding protein 43 (TDP-43), a protein relevant to several neurodegenerative diseases. Diuretics in particular appear to be driving the association.

Although diuretics might be a better option for preventing brain neuropathology, it’s too early to make firm recommendations solely on the basis of these results as to what blood pressure–lowering agent to prescribe a particular patient, said study investigator Ahmad Sajjadi, MD, assistant professor of neurology, University of California, Irvine.

“This is early stages and preliminary results,” said Dr. Sajjadi, “but it’s food for thought.”

The findings were presented at the 2021 annual meeting of the American Neurological Association.
 

Autopsy data

The study included 3,315 individuals who had donated their brains to research. The National Alzheimer’s Coordinating Center maintains a database that includes data from 32 Alzheimer’s disease research centers in the United States. Participants in the study must have visited one of these centers within 4 years of death. Each person whose brain was included in the study underwent two or more BP measurements on at least 50% of visits.

The mean age at death was 81.7 years, and the mean time between last visit and death was 13.1 months. About 44.4% of participants were women, 57.0% had at least a college degree, and 84.7% had cognitive impairment.

Researchers defined hypertension as systolic BP of at least 130 mm Hg, diastolic BP of at least 80 mm Hg, mean arterial pressure of at least 100 mm Hg, and pulse pressure of at least 60 mm Hg.
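Mean arterial pressure and pulse pressure are both derived from the systolic and diastolic readings (MAP is commonly approximated as DBP + (SBP − DBP)/3; pulse pressure is SBP − DBP). The four thresholds can be sketched as follows; the function name is illustrative, and the article does not specify how the criteria were combined:

```python
# Illustrative sketch of the study's stated pressure thresholds.
# How the four criteria were combined into "hypertension" is not
# specified in the article, so each is reported separately.
def pressure_flags(sbp: float, dbp: float) -> dict:
    """Return which of the study's four thresholds a BP reading meets."""
    mean_arterial = dbp + (sbp - dbp) / 3  # standard MAP approximation
    pulse_pressure = sbp - dbp
    return {
        "systolic >= 130 mm Hg": sbp >= 130,
        "diastolic >= 80 mm Hg": dbp >= 80,
        "mean arterial >= 100 mm Hg": mean_arterial >= 100,
        "pulse pressure >= 60 mm Hg": pulse_pressure >= 60,
    }

# A 150/85 mm Hg reading meets all four thresholds
# (MAP is about 106.7 mm Hg; pulse pressure is 65 mm Hg).
```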

Antihypertensive medications that were evaluated included antiadrenergic agents, ACE inhibitors, angiotensin II receptor blockers, beta blockers, calcium channel blockers, diuretics, vasodilators, and combination therapies.

The investigators assessed the number of neuropathologies. In addition to Alzheimer’s disease neuropathology, which included amyloid-beta, tau, Lewy bodies, and TDP-43, they also assessed for atherosclerosis, arteriolosclerosis, cerebral amyloid angiopathy, frontotemporal lobar degeneration, and hippocampal sclerosis.

Results showed that use of any antihypertensive was associated with a lower likelihood of Alzheimer’s disease neuropathology (odds ratio, 0.822), Lewy bodies (OR, 0.786), and TDP-43 (OR, 0.597). Use of antihypertensives was also associated with increased odds of atherosclerosis (OR, 1.217) (all P < .05).
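The percentage changes quoted in the opening paragraph come from these odds ratios: a percent change in odds is (OR − 1) × 100, with negative values indicating decreases. A minimal sketch of the conversion (small differences from the quoted figures reflect rounding in the article):

```python
# Convert the reported odds ratios into approximate percent changes in
# the odds of each finding: percent change = (OR - 1) * 100.
odds_ratios = {
    "Alzheimer's disease neuropathology": 0.822,
    "Lewy bodies": 0.786,
    "TDP-43": 0.597,
    "atherosclerosis": 1.217,
}
for outcome, or_value in odds_ratios.items():
    change = (or_value - 1) * 100
    direction = "lower" if change < 0 else "higher"
    print(f"{outcome}: {abs(change):.0f}% {direction} odds")
```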

The study showed that hypertensive systolic BP was associated with higher odds of Alzheimer’s disease neuropathology (OR, 1.28; P < .05).

 

 

Differences by drug type

Results differed by antihypertensive class. Angiotensin II receptor blockers decreased the odds of Alzheimer’s disease neuropathology by 40% (OR, 0.60; P < .05). Diuretics decreased the odds of Alzheimer’s disease by 36% (OR, 0.64; P < .001) and of hippocampal sclerosis by 32% (OR, 0.68; P < .05).

“We see diuretics are a main driver, especially for lower odds of Alzheimer’s disease and lower odds of hippocampal sclerosis,” said lead author Hanna L. Nguyen, a first-year medical student at the University of California, Irvine.

The results indicate that it is the medications, not BP levels, that account for these associations, she added.

One potential mechanism linking antihypertensives to brain pathology is that with these agents, BP is maintained in the target zone. Blood pressure that’s too high can damage blood vessels, whereas BP that’s too low may result in less than adequate perfusion, said Ms. Nguyen.

These medications may also alter pathways leading to degeneration and could, for example, affect the apo E mechanism of Alzheimer’s disease, she added.

The researchers plan to conduct subset analyses using apo E genetic status and age of death.

Although this is a “massive database,” it has limitations. For example, said Dr. Sajjadi, it does not reveal when patients started taking BP medication, how long they had been taking it, or why.

“We don’t know the exact reason they were taking these medications. Was it just hypertension, or did they also have heart disease, stroke, a kidney problem, or was there another explanation?” he said.

Following the study presentation, session comoderator Krish Sathian, MBBS, PhD, professor of neurology, neural, and behavioral sciences, and psychology and director of the Neuroscience Institute, Penn State University, Hershey, called this work “fascinating. It provides a lot of data that really touches on everyday practice,” inasmuch as clinicians often prescribe antihypertensive medications and see patients with these kinds of brain disorders.

The investigators and Dr. Sathian reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 29(12)

 


FROM ANA 2021

Publish date: November 5, 2021

Influenza tied to long-term increased risk for Parkinson’s disease


Influenza infection is linked to a subsequent diagnosis of Parkinson’s disease (PD) more than 10 years later, resurfacing a long-held debate about whether infection increases the risk for movement disorders over the long term.

In a large case-control study, investigators found the odds of PD were elevated by approximately 90% for PD that occurred more than 15 years after influenza infection and by more than 70% for PD occurring more than 10 years after the flu.

“This study is not definitive by any means, but it certainly suggests there are potential long-term consequences from influenza,” study investigator Noelle M. Cocoros, DSc, research scientist at Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, said in an interview.

The study was published online Oct. 25 in JAMA Neurology.

Ongoing debate

The debate about whether influenza is associated with PD has been going on as far back as the 1918 influenza pandemic, when experts documented parkinsonism in affected individuals.

Using data from the Danish patient registry, researchers identified 10,271 subjects diagnosed with PD during a 17-year period (2000-2016). Of these, 38.7% were female, and the mean age was 71.4 years.

They matched these subjects for age and sex to 51,355 controls without PD. Compared with controls, slightly fewer individuals with PD had chronic obstructive pulmonary disease (COPD) or emphysema, but there was a similar distribution of cardiovascular disease and various other conditions.

Researchers collected data on influenza diagnoses from inpatient and outpatient hospital clinics from 1977 to 2016. They plotted these by month and year on a graph, calculated the median number of diagnoses per month, and identified peaks as those with more than threefold the median.
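The peak-identification rule described above (months with counts more than threefold the monthly median) is straightforward to sketch; the monthly counts below are invented for illustration:

```python
import statistics

# Flag "peak" influenza months: counts more than 3x the monthly median,
# following the rule described in the article. Counts are invented.
monthly_counts = [12, 9, 15, 80, 11, 10, 8, 7, 95, 13, 10, 9]
median_count = statistics.median(monthly_counts)
peak_months = [i for i, n in enumerate(monthly_counts) if n > 3 * median_count]
print(median_count, peak_months)  # prints 10.5 [3, 8]
```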

They categorized cases into groups based on the time between infection and PD diagnosis: more than 10 years, 10-15 years, and more than 15 years.

The time lapse accounts for a rather long “run-up” to PD, said Dr. Cocoros. There’s a sometimes decades-long preclinical phase before patients develop typical motor signs and a prodromal phase where they may present with nonmotor symptoms such as sleep disorders and constipation.

“We expected there would be at least 10 years between any infection and PD if there was an association present,” said Dr. Cocoros.

Investigators found an association between influenza exposure and PD diagnosis “that held up over time,” she said.

For more than 10 years before PD, the likelihood of a diagnosis for the infected compared with the unexposed was increased 73% (odds ratio [OR] 1.73; 95% confidence interval, 1.11-2.71; P = .02) after adjustment for cardiovascular disease, diabetes, chronic obstructive pulmonary disease, emphysema, lung cancer, Crohn’s disease, and ulcerative colitis.

The odds increased with more time from infection. For more than 15 years, the adjusted OR was 1.91 (95% CI, 1.14-3.19; P = .01).
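The “approximately 90%” and “more than 70%” figures in the opening paragraph follow directly from these adjusted odds ratios: for an OR above 1, the percent increase in odds is (OR − 1) × 100.

```python
# Percent increase in the odds of PD implied by the adjusted odds ratios.
for interval, or_value in [("more than 10 years", 1.73),
                           ("more than 15 years", 1.91)]:
    increase = (or_value - 1) * 100
    print(f"{interval}: {increase:.0f}% higher odds of PD")
```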

However, for the 10- to 15-year time frame, the point estimate was reduced and the CI included 1 (OR, 1.33; 95% CI, 0.54-3.27; P = .53). This “is a little hard to interpret,” but could be a result of the small numbers, exposure misclassification, or because “the longer time interval is what’s meaningful,” said Dr. Cocoros.
 

 

 

Potential COVID-19–related PD surge?

In a sensitivity analysis, researchers looked at peak infection activity. “We wanted to increase the likelihood of these diagnoses representing actual infection,” Dr. Cocoros noted.

Here, the OR was still elevated at more than 10 years, but the CI was quite wide and included 1 (OR, 1.52; 95% CI, 0.80-2.89; P = .21). “So the association holds up, but the estimates are quite unstable,” said Dr. Cocoros.

Researchers examined associations with numerous other infection types, but did not see the same trend over time. Some infections – for example, gastrointestinal infections and septicemia – were associated with PD within 5 years, but most associations appeared to be null after more than 10 years.

“There seemed to be associations earlier between the infection and PD, which we interpret to suggest there’s actually not a meaningful association,” said Dr. Cocoros.

An exception might be urinary tract infections (UTIs), where after 10 years, the adjusted OR was 1.19 (95% CI, 1.01-1.40). Research suggests patients with PD often have UTIs and neurogenic bladder.

“It’s possible that UTIs could be an early symptom of PD rather than a causative factor,” said Dr. Cocoros.

It’s unclear how influenza might lead to PD, but it could be that the virus gets into the central nervous system, resulting in neuroinflammation. Cytokines generated in response to the influenza infection might damage the brain.

“The infection could be a ‘primer’ or an initial ‘hit’ to the system, maybe setting people up for PD,” said Dr. Cocoros.

As for the current COVID-19 pandemic, some experts are concerned about a potential surge in PD cases in decades to come, and are calling for prospective monitoring of patients with this infection, said Dr. Cocoros.

However, she noted that infections don’t account for all PD cases and that genetic and environmental factors also influence risk.

Many individuals who contract influenza don’t seek medical care or get tested, so it’s possible the study counted those who had the infection as unexposed. Another potential study limitation was that small numbers for some infections, for example, Helicobacter pylori and hepatitis C, limited the ability to interpret results.
 

‘Exciting and important’ findings

Commenting on the research for this news organization, Aparna Wagle Shukla, MD, professor, Norman Fixel Institute for Neurological Diseases, University of Florida, Gainesville, said the results amid the current pandemic are “exciting and important” and “have reinvigorated interest” in the role of infection in PD.

However, the study had some limitations, an important one being lack of accounting for confounding factors, including environmental factors, she said. Exposure to pesticides, living in a rural area, drinking well water, and having had a head injury may increase PD risk, whereas high intake of caffeine, nicotine, alcohol, and nonsteroidal anti-inflammatory drugs might lower the risk.

The researchers did not take into account exposure to multiple microbes or “infection burden,” said Dr. Wagle Shukla, who was not involved in the current study. In addition, as the data are from a single country with exposure to specific influenza strains, application of the findings elsewhere may be limited.

Dr. Wagle Shukla noted that a case-control design “isn’t ideal” from an epidemiological perspective. “Future studies should involve large cohorts followed longitudinally.”

The study was supported by grants from the Lundbeck Foundation and the Augustinus Foundation. Dr. Cocoros has disclosed no relevant financial relationships. Several coauthors have disclosed relationships with industry. The full list can be found with the original article.

A version of this article first appeared on Medscape.com.


Influenza infection is linked to a subsequent diagnosis of Parkinson’s disease (PD) more than 10 years later, resurfacing a long-held debate about whether infection increases the risk for movement disorders over the long term.

In a large case-control study, investigators found the odds of PD were elevated by approximately 90% for PD that occurred more than 15 years after influenza infection and by more than 70% for PD occurring more than 10 years after the flu.

“This study is not definitive by any means, but it certainly suggests there are potential long-term consequences from influenza,” study investigator Noelle M. Cocoros, DSc, research scientist at Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, said in an interview.

The study was published online Oct. 25 in JAMA Neurology.

Ongoing debate

The debate about whether influenza is associated with PD has been going on as far back as the 1918 influenza pandemic, when experts documented parkinsonism in affected individuals.

Using data from the Danish patient registry, researchers identified 10,271 subjects diagnosed with PD during a 17-year period (2000-2016). Of these, 38.7% were female, and the mean age was 71.4 years.

They matched these subjects for age and sex to 51,355 controls without PD. Compared with controls, slightly fewer individuals with PD had chronic obstructive pulmonary disease (COPD) or emphysema, but there was a similar distribution of cardiovascular disease and various other conditions.

Researchers collected data on influenza diagnoses from inpatient and outpatient hospital clinics from 1977 to 2016. They plotted these by month and year on a graph, calculated the median number of diagnoses per month, and identified peaks as those with more than threefold the median.

They categorized cases in groups related to the time between the infection and PD: More than 10 years, 10-15 years, and more than 15 years.

The time lapse accounts for a rather long “run-up” to PD, said Dr. Cocoros. There’s a sometimes decades-long preclinical phase before patients develop typical motor signs and a prodromal phase where they may present with nonmotor symptoms such as sleep disorders and constipation.

“We expected there would be at least 10 years between any infection and PD if there was an association present,” said Dr. Cocoros.

Investigators found an association between influenza exposure and PD diagnosis “that held up over time,” she said.

For more than 10 years before PD, the likelihood of a diagnosis for the infected compared with the unexposed was increased 73% (odds ratio [OR] 1.73; 95% confidence interval, 1.11-2.71; P = .02) after adjustment for cardiovascular disease, diabetes, chronic obstructive pulmonary disease, emphysema, lung cancer, Crohn’s disease, and ulcerative colitis.

The odds increased with more time from infection. For more than 15 years, the adjusted OR was 1.91 (95% CI, 1.14 - 3.19; P =.01).

However, for the 10- to 15-year time frame, the point estimate was reduced and the CI nonsignificant (OR, 1.33; 95% CI, 0.54-3.27; P = .53). This “is a little hard to interpret,” but could be a result of the small numbers, exposure misclassification, or because “the longer time interval is what’s meaningful,” said Dr. Cocoros.
 

 

 

Potential COVID-19–related PD surge?

In a sensitivity analysis, researchers looked at peak infection activity. “We wanted to increase the likelihood of these diagnoses representing actual infection,” Dr. Cocoros noted.

Here, the OR was still elevated at more than 10 years, but the CI was quite wide and included 1 (OR, 1.52; 95% CI, 0.80-2.89; P = .21). “So the association holds up, but the estimates are quite unstable,” said Dr. Cocoros.

Researchers examined associations with numerous other infection types, but did not see the same trend over time. Some infections – for example, gastrointestinal infections and septicemia – were associated with PD within 5 years, but most associations appeared to be null after more than 10 years.

“There seemed to be associations earlier between the infection and PD, which we interpret to suggest there’s actually not a meaningful association,” said Dr. Cocoros.


Influenza infection is linked to a subsequent diagnosis of Parkinson’s disease (PD) more than 10 years later, resurfacing a long-held debate about whether infection increases the risk for movement disorders over the long term.

In a large case-control study, investigators found the odds of PD were elevated by approximately 90% when influenza infection occurred more than 15 years before diagnosis and by more than 70% when it occurred more than 10 years before.

“This study is not definitive by any means, but it certainly suggests there are potential long-term consequences from influenza,” study investigator Noelle M. Cocoros, DSc, research scientist at Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, said in an interview.

The study was published online Oct. 25 in JAMA Neurology.

Ongoing debate

The debate about whether influenza is associated with PD dates back at least to the 1918 influenza pandemic, when experts documented parkinsonism in affected individuals.

Using data from the Danish patient registry, researchers identified 10,271 subjects diagnosed with PD during a 17-year period (2000-2016). Of these, 38.7% were female, and the mean age was 71.4 years.

They matched these subjects for age and sex to 51,355 controls without PD. Compared with controls, slightly fewer individuals with PD had chronic obstructive pulmonary disease (COPD) or emphysema, but there was a similar distribution of cardiovascular disease and various other conditions.

Researchers collected data on influenza diagnoses from inpatient and outpatient hospital clinics from 1977 to 2016. They plotted these by month and year, calculated the median number of diagnoses per month, and flagged as peaks the months with more than three times the median.
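The peak-identification step can be sketched in a few lines of Python; the monthly counts below are hypothetical, not the Danish registry data.

```python
from statistics import median

# Hypothetical monthly influenza diagnosis counts (month -> count);
# the study used Danish inpatient and outpatient data from 1977 to 2016.
monthly_counts = {
    "2000-01": 4, "2000-02": 3, "2000-03": 20,
    "2000-04": 5, "2000-05": 2, "2000-06": 2,
}

typical = median(monthly_counts.values())
# Peaks are months with more than threefold the median number of diagnoses.
peaks = [month for month, n in monthly_counts.items() if n > 3 * typical]
print(peaks)  # -> ['2000-03']
```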

They categorized cases into groups according to the time between the infection and the PD diagnosis: more than 10 years, 10-15 years, and more than 15 years.

The time lapse accounts for a rather long “run-up” to PD, said Dr. Cocoros. There’s a sometimes decades-long preclinical phase before patients develop typical motor signs and a prodromal phase where they may present with nonmotor symptoms such as sleep disorders and constipation.

“We expected there would be at least 10 years between any infection and PD if there was an association present,” said Dr. Cocoros.

Investigators found an association between influenza exposure and PD diagnosis “that held up over time,” she said.

When influenza infection occurred more than 10 years before PD, the odds of a diagnosis were increased by 73% relative to the unexposed (odds ratio [OR], 1.73; 95% confidence interval [CI], 1.11-2.71; P = .02) after adjustment for cardiovascular disease, diabetes, chronic obstructive pulmonary disease, emphysema, lung cancer, Crohn’s disease, and ulcerative colitis.

The odds increased with more time from infection. For more than 15 years, the adjusted OR was 1.91 (95% CI, 1.14-3.19; P = .01).

However, for the 10- to 15-year time frame, the point estimate was reduced and the CI nonsignificant (OR, 1.33; 95% CI, 0.54-3.27; P = .53). This “is a little hard to interpret,” but could be a result of the small numbers, exposure misclassification, or because “the longer time interval is what’s meaningful,” said Dr. Cocoros.
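Each odds ratio and Wald 95% confidence interval above comes from a 2×2 table of exposure by outcome. A minimal sketch with invented cell counts (not the study’s data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed noncases,
    c = unexposed cases, d = unexposed noncases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts for illustration only.
or_, lo, hi = odds_ratio_ci(a=10, b=20, c=5, d=40)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR 4.00 (95% CI, 1.20-13.28)
```

A CI whose lower bound stays above 1, as in the >10-year and >15-year estimates, is what makes those associations statistically significant.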

Potential COVID-19–related PD surge?

In a sensitivity analysis, researchers looked at peak infection activity. “We wanted to increase the likelihood of these diagnoses representing actual infection,” Dr. Cocoros noted.

Here, the OR was still elevated at more than 10 years, but the CI was quite wide and included 1 (OR, 1.52; 95% CI, 0.80-2.89; P = .21). “So the association holds up, but the estimates are quite unstable,” said Dr. Cocoros.

Researchers examined associations with numerous other infection types, but did not see the same trend over time. Some infections – for example, gastrointestinal infections and septicemia – were associated with PD within 5 years, but most associations appeared to be null after more than 10 years.

“There seemed to be associations earlier between the infection and PD, which we interpret to suggest there’s actually not a meaningful association,” said Dr. Cocoros.

An exception might be urinary tract infections (UTIs), where after 10 years, the adjusted OR was 1.19 (95% CI, 1.01-1.40). Research suggests patients with PD often have UTIs and neurogenic bladder.

“It’s possible that UTIs could be an early symptom of PD rather than a causative factor,” said Dr. Cocoros.

It’s unclear how influenza might lead to PD, but one possibility is that the virus enters the central nervous system, resulting in neuroinflammation. Cytokines generated in response to the infection might then damage the brain.

“The infection could be a ‘primer’ or an initial ‘hit’ to the system, maybe setting people up for PD,” said Dr. Cocoros.

As for the current COVID-19 pandemic, some experts are concerned about a potential surge in PD cases in decades to come, and are calling for prospective monitoring of patients with this infection, said Dr. Cocoros.

However, she noted that infections don’t account for all PD cases and that genetic and environmental factors also influence risk.

Many individuals who contract influenza don’t seek medical care or get tested, so the study may have counted some people who had the infection as unexposed. Another potential limitation was the small numbers for some infections – for example, Helicobacter pylori and hepatitis C – which limited the ability to interpret results.
 

‘Exciting and important’ findings

Commenting on the research for this news organization, Aparna Wagle Shukla, MD, professor, Norman Fixel Institute for Neurological Diseases, University of Florida, Gainesville, said the results amid the current pandemic are “exciting and important” and “have reinvigorated interest” in the role of infection in PD.

However, the study had some limitations, an important one being lack of accounting for confounding factors, including environmental factors, she said. Exposure to pesticides, living in a rural area, drinking well water, and having had a head injury may increase PD risk, whereas high intake of caffeine, nicotine, alcohol, and nonsteroidal anti-inflammatory drugs might lower the risk.

The researchers did not take into account exposure to multiple microbes or “infection burden,” said Dr. Wagle Shukla, who was not involved in the current study. In addition, as the data are from a single country with exposure to specific influenza strains, application of the findings elsewhere may be limited.

Dr. Wagle Shukla noted that a case-control design “isn’t ideal” from an epidemiological perspective. “Future studies should involve large cohorts followed longitudinally.”

The study was supported by grants from the Lundbeck Foundation and the Augustinus Foundation. Dr. Cocoros has disclosed no relevant financial relationships. Several coauthors have disclosed relationships with industry. The full list can be found with the original article.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 29(12)
Publish date: November 2, 2021

DIY nerve stimulation effective in episodic migraine

Article Type
Changed
Mon, 11/29/2021 - 11:03

Self-administered external trigeminal nerve stimulation (E-TNS) that is available over the counter is superior to sham stimulation in relieving pain for patients with episodic migraine, results from a phase 3 study show.

This is great news for headache patients who want to explore nondrug treatment options, said study investigator Deena E. Kuruvilla, MD, neurologist and headache specialist at the Westport Headache Institute, Connecticut.

She added that such devices “aren’t always part of the conversation when we’re discussing preventive and acute treatments with our patients. Making this a regular part of the conversation might be helpful to patients.”

The findings were presented at ANA 2021: 146th Annual Meeting of the American Neurological Association (ANA), which was held online.
 

A key therapeutic target

The randomized, double-blind trial compared E-TNS with sham stimulation for the acute treatment of migraine.

The E-TNS device (Verum Cefaly Abortive Program) stimulates the supraorbital nerve in the forehead. “This nerve is a branch of the trigeminal nerve, which is thought to be the key player in migraine pathophysiology,” Dr. Kuruvilla noted.

The device has been cleared by the U.S. Food and Drug Administration for acute and preventive treatment of migraine.

During a run-in period before randomization, patients were asked to keep a detailed headache diary and to become comfortable using the trial device to treat an acute migraine attack at home.

The study enrolled 538 adult patients at 10 centers. Patients were aged 18 to 65 years and had been having episodic migraines, with or without aura, for at least a year. They had to have received a migraine diagnosis before age 50 and to be experiencing migraine attacks on 2 to 8 days per month.

The patients used the device only for a migraine of at least moderate intensity that was accompanied by at least one migraine-associated symptom, such as photophobia, phonophobia, or nausea. They were asked not to take rescue medication prior to or during a therapy session.

Study participants applied either neurostimulation or sham stimulation for a continuous 2-hour period within 4 hours of a migraine attack over the 2-month study period.

The two primary endpoints were pain freedom and freedom from the most bothersome migraine-associated symptoms at 2 hours.

Compared with sham treatment, active stimulation was more effective in achieving pain freedom (P = .043) and freedom from the most bothersome migraine-associated symptom (P = .001) at 2 hours.

“So the study did meet both primary endpoints with statistical significance,” said Dr. Kuruvilla.
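Binary endpoints such as 2-hour pain freedom are typically compared with a two-proportion test; the responder counts below are invented for illustration (the trial’s actual counts are not given here).

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Invented example: 67/269 pain-free on active vs 48/269 on sham.
z, p = two_prop_z(67, 269, 48, 269)
```

A z statistic of about 2 corresponds to a two-sided p just under .05, the conventional significance threshold.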

The five secondary endpoints included pain relief at 2 hours; absence of all migraine-associated symptoms at 2 hours; use of rescue medication within 24 hours; sustained pain freedom at 24 hours; and sustained pain relief at 24 hours.

All but one of these endpoints reached statistical significance, showing superiority of the active intervention; the exception was use of rescue medication.

The most common adverse event (AE) was forehead paresthesia, discomfort, or burning, which was more common in the active-treatment group than in the sham-treatment group (P = .009). There were four cases of nausea or vomiting in the active-treatment group and none in the sham-treatment group. There were no serious AEs.

Available over the counter

Both moderators of the headache poster tour that featured this study – Justin C. McArthur, MBBS, from Johns Hopkins University, Baltimore, and Steven Galetta, MD, from NYU Grossman School of Medicine – praised the presentation.

Dr. Galetta questioned whether patients were receiving preventive therapies. Dr. Kuruvilla said that the patients were allowed to enter the trial while taking preventive therapies, including antiepileptic treatments, blood pressure medications, and antidepressants, but that they had to be receiving stable doses.

The investigators didn’t distinguish between participants who were taking preventive therapies and those who weren’t, she said. “The aim was really to look at acute treatment for migraine,” and patients taking such medication “had been stable on their regimen for a pretty prolonged period of time.”

Dr. McArthur asked about the origin of the nausea some patients experienced.

It was difficult to determine whether the nausea was an aspect of an individual patient’s migraine attack or was an effect of the stimulation, said Dr. Kuruvilla. She noted that some patients found the vibrating sensation from the device uncomfortable and that nausea could be associated with pain at the site.

The device costs $300 to $400 (U.S.) and is available over the counter.

Dr. Kuruvilla is a consultant for Cefaly, Neurolief, Theranica, Now What Media, and Kx Advisors. She is on the speakers bureau for AbbVie/Allergan, Amgen/Novartis, Lilly, the American Headache Society, Biohaven, and CME meetings; serves on advisory boards for AbbVie/Allergan, Lilly, Theranica, and Amgen/Novartis; and is an editor for Healthline and an author for WebMD/Medscape.

A version of this article first appeared on Medscape.com.


Issue
Neurology Reviews - 29(12)
Article Source
FROM ANA
Publish date: October 20, 2021

COVID-19: Greater mortality among psych patients remains a mystery

Article Type
Changed
Thu, 09/30/2021 - 11:18

 

Dr. Katlyn Nemani

Antipsychotics are not responsible for the increased COVID-related death rate among patients with serious mental illness (SMI), new research shows.

The significant increase in COVID-19 mortality that continues to be reported among those with schizophrenia and schizoaffective disorder “underscores the importance of protective interventions for this group, including priority vaccination,” study investigator Katlyn Nemani, MD, research assistant professor, department of psychiatry, New York University, told this news organization.

The study was published online September 22 in JAMA Psychiatry.
 

Threefold increase in death

Previous research has linked a diagnosis of a schizophrenia spectrum disorder, which includes schizophrenia and schizoaffective disorder, to an almost threefold increase in mortality among patients with COVID-19.

Some population-based research has also reported a link between antipsychotic medication use and increased risk for COVID-related mortality, but these studies did not take psychiatric diagnoses into account.

“This raised the question of whether the increased risk observed in this population is related to underlying psychiatric illness or its treatment,” said Dr. Nemani.

The retrospective cohort study included 464 adults (mean age, 53 years) who were diagnosed with COVID-19 between March 3, 2020, and Feb. 17, 2021, and who had previously been diagnosed with schizophrenia spectrum disorder or bipolar disorder. Of these, 42.2% were treated with an antipsychotic medication.

The primary endpoint was death within 60 days of COVID-19 diagnosis. Covariates included sociodemographic characteristics (patient-reported race and ethnicity, age, and insurance type), psychiatric diagnosis, medical comorbidities, and smoking status.

Of the total, 41 patients (8.8%) died. The 60-day fatality rate was 13.7% among patients with a schizophrenia spectrum disorder (n = 182) and 5.7% among patients with bipolar disorder (n = 282).
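As a back-of-the-envelope check, the reported fatality rates imply the underlying death counts, from which a crude (unadjusted) odds ratio can be recovered; it is close to, but smaller than, the covariate-adjusted OR of 2.88 the study reported.

```python
# Death counts implied by the reported 60-day fatality rates.
n_schizo, n_bipolar = 182, 282
deaths_schizo = round(0.137 * n_schizo)    # 25
deaths_bipolar = round(0.057 * n_bipolar)  # 16
assert deaths_schizo + deaths_bipolar == 41  # matches the 41 total deaths

# Crude odds ratio for death, schizophrenia spectrum vs bipolar disorder.
odds_s = deaths_schizo / (n_schizo - deaths_schizo)
odds_b = deaths_bipolar / (n_bipolar - deaths_bipolar)
crude_or = odds_s / odds_b
print(f"{crude_or:.2f}")  # 2.65 (the adjusted OR was 2.88)
```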

Antipsychotic treatment was not significantly associated with mortality (odds ratio, 1.00; 95% confidence interval, 0.48-2.08; P = .99).

“This suggests that antipsychotic medication is unlikely to be responsible for the increased risk we’ve observed in this population, although this finding needs to be replicated,” said Dr. Nemani.
 

Surprise finding

A diagnosis of a schizophrenia spectrum disorder was associated with an almost threefold increased risk for mortality compared with bipolar disorder (OR, 2.88; 95% CI, 1.36-6.11; P = .006).

“This was a surprising finding,” said Dr. Nemani. “A possible explanation is differences in immune function associated with schizophrenia spectrum illness.”

She noted that there is evidence suggesting the immune system may play a role in the pathogenesis of schizophrenia, and research has shown that pneumonia and infection are among the leading causes of premature mortality in this population.

As well, several potential risk factors disproportionately affect people with serious mental illness, including an increase in the prevalence of medical comorbidities such as cardiovascular disease and diabetes, socioeconomic disadvantages, and barriers to accessing timely care. Prior studies have also found that people with SMI are less likely to receive preventive care interventions, including vaccination, said Dr. Nemani.

However, these factors are unlikely to fully account for the increased risk found in the study, she said.

“Our study population was limited to people who had received treatment within the NYU Langone Health System. We took a comprehensive list of sociodemographic and medical risk factors into account, and our research was conducted prior to the availability of COVID-19 vaccines,” she said.

Further research is necessary to understand what underlies the increase in susceptibility to severe infection among patients with schizophrenia and to identify interventions that may mitigate risk, said Dr. Nemani.

“This includes evaluating systems-level factors, such as access to preventive interventions and treatment, as well as investigating underlying immune mechanisms that may contribute to severe and fatal infection,” she said.

The researchers could not validate psychiatric diagnoses or capture deaths not documented in the electronic health record. In addition, the limited sample size precluded analysis of the use of individual antipsychotic medications, which may differ in their associated effects.

“It’s possible individual antipsychotic medications may be associated with harmful or protective effects,” said Dr. Nemani.

The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


“It’s possible individual antipsychotic medications may be associated with harmful or protective effects,” said Dr. Nemani.

The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 

Dr. Katlyn Nemani

Antipsychotics are not responsible for the increased COVID-related death rate among patients with serious mental illness (SMI), new research shows.

The significant increase in COVID-19 mortality that continues to be reported among those with schizophrenia and schizoaffective disorder “underscores the importance of protective interventions for this group, including priority vaccination,” study investigator Katlyn Nemani, MD, research assistant professor, department of psychiatry, New York University, told this news organization.

The study was published online September 22 in JAMA Psychiatry.
 

Threefold increase in death

Previous research has linked a diagnosis of a schizophrenia spectrum disorder, which includes schizophrenia and schizoaffective disorder, to an almost threefold increase in mortality among patients with COVID-19.

Some population-based research has also reported a link between antipsychotic medication use and increased risk for COVID-related mortality, but these studies did not take psychiatric diagnoses into account.

“This raised the question of whether the increased risk observed in this population is related to underlying psychiatric illness or its treatment,” said Dr. Nemani.

The retrospective cohort study included 464 adults (mean age, 53 years) who were diagnosed with COVID-19 between March 3, 2020, and Feb. 17, 2021, and who had previously been diagnosed with schizophrenia spectrum disorder or bipolar disorder. Of these, 42.2% were treated with an antipsychotic medication.

The primary endpoint was death within 60 days of COVID-19 diagnosis. Covariates included sociodemographic characteristics (patient-reported race and ethnicity, age, and insurance type), psychiatric diagnosis, medical comorbidities, and smoking status.

Of the total, 41 patients (8.8%) died. The 60-day fatality rate was 13.7% among patients with a schizophrenia spectrum disorder (n = 182) and 5.7% among patients with bipolar disorder (n = 282).

Antipsychotic treatment was not significantly associated with mortality (odds ratio, 1.00; 95% confidence interval, 0.48-2.08; P = .99).

“This suggests that antipsychotic medication is unlikely to be responsible for the increased risk we’ve observed in this population, although this finding needs to be replicated,” said Dr. Nemani.
 

Surprise finding

A diagnosis of a schizophrenia spectrum disorder was associated with an almost threefold increased risk for mortality compared with bipolar disorder (OR, 2.88; 95% CI, 1.36-6.11; P = .006).
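For readers who want to sanity-check the arithmetic, the crude (unadjusted) odds ratio implied by the reported 60-day fatality rates can be reconstructed from the article's own numbers. This is a back-of-the-envelope sketch, not the study's covariate-adjusted model:

```python
# Rough arithmetic check, not the study's adjusted model: reconstruct
# approximate death counts from the reported 60-day fatality rates
# (13.7% of 182 schizophrenia spectrum patients; 5.7% of 282 bipolar
# patients), then compute the crude (unadjusted) odds ratio.
deaths_scz = round(0.137 * 182)  # ~25 deaths
deaths_bp = round(0.057 * 282)   # ~16 deaths (25 + 16 = 41, matching the article)

odds_scz = deaths_scz / (182 - deaths_scz)
odds_bp = deaths_bp / (282 - deaths_bp)
crude_or = odds_scz / odds_bp

print(f"crude OR = {crude_or:.2f}")  # ~2.65
```

The crude estimate of roughly 2.65 sits close to, but below, the adjusted OR of 2.88, which additionally accounts for age, comorbidities, and the other covariates in the model.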

“This was a surprising finding,” said Dr. Nemani. “A possible explanation is differences in immune function associated with schizophrenia spectrum illness.”

She noted that there is evidence suggesting the immune system may play a role in the pathogenesis of schizophrenia, and research has shown that pneumonia and infection are among the leading causes of premature mortality in this population.

In addition, several potential risk factors disproportionately affect people with serious mental illness, including an increased prevalence of medical comorbidities such as cardiovascular disease and diabetes, socioeconomic disadvantages, and barriers to accessing timely care. Prior studies have also found that people with SMI are less likely to receive preventive care interventions, including vaccination, said Dr. Nemani.

However, these factors are unlikely to fully account for the increased risk found in the study, she said.

“Our study population was limited to people who had received treatment within the NYU Langone Health System. We took a comprehensive list of sociodemographic and medical risk factors into account, and our research was conducted prior to the availability of COVID-19 vaccines,” she said.

Further research is necessary to understand what underlies the increase in susceptibility to severe infection among patients with schizophrenia and to identify interventions that may mitigate risk, said Dr. Nemani.

“This includes evaluating systems-level factors, such as access to preventive interventions and treatment, as well as investigating underlying immune mechanisms that may contribute to severe and fatal infection,” she said.

The researchers could not validate psychiatric diagnoses or capture deaths not documented in the electronic health record. In addition, the limited sample size precluded analysis of the use of individual antipsychotic medications, which may differ in their associated effects.

“It’s possible individual antipsychotic medications may be associated with harmful or protective effects,” said Dr. Nemani.

The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Lipid levels tied to ALS risk

Article Type
Changed
Thu, 12/15/2022 - 15:40

Elevated levels of high-density lipoprotein (HDL) and apolipoprotein A1 (apoA1) are associated with a reduced risk for amyotrophic lateral sclerosis (ALS), new research shows.

The study also linked a higher ratio of total cholesterol to HDL with an increased risk for ALS. These findings, investigators noted, point to potential future biomarkers in screening for ALS and perhaps an approach to reduce risk or delay onset of ALS in the longer term.

“They may help build a biochemical picture of what’s going on and who might be at risk of developing ALS in the near future, particularly in people with a genetic predisposition to ALS,” study investigator Alexander G. Thompson, DPhil, Medical Research Council clinician scientist, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom, said in an interview.

He emphasized that although the current observational study cannot show cause and effect, such a relationship may exist.

The study was published online September 13 in the Journal of Neurology, Neurosurgery and Psychiatry.
 

Registry data

ALS is a disorder of progressive degeneration of upper and lower motor neurons. Genetic variants account for fewer than 15% of cases. The factors that are associated with the greatest risk are unclear.

To investigate, the researchers used data from the UK Biobank, a prospective cohort study of persons aged 39-72 years. Participants underwent an initial assessment between March 2006 and October 2010 and were followed for a median of 11.9 years.

In addition to providing demographic and health information, participants provided blood samples for biochemical analysis. This included measurements of total cholesterol, HDL, low-density lipoprotein (LDL) cholesterol, triglycerides, apoA1, apolipoprotein B (apoB), A1c, and creatinine.

Researchers used diagnostic codes in inpatient health records and death certificate information to verify ALS diagnoses.

The analysis included data from 502,409 participants. The mean age of the participants was 58 years, and 54.4% were women. During follow-up, 343 participants were diagnosed with ALS, yielding a crude incidence of 5.85 per 100,000 per year (95% confidence interval, 5.25-6.51).
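As a rough consistency check, the reported crude incidence can be approximated from the article's numbers. This is only an approximation: the study computed incidence from actual person-time, whereas the sketch below uses the median follow-up as a stand-in.

```python
# Approximate the crude ALS incidence using median follow-up (11.9
# years) as a stand-in for total person-years -- a simplification,
# since each participant's actual follow-up time varies.
participants = 502_409
cases = 343
person_years = participants * 11.9

incidence_per_100k = cases / person_years * 100_000
print(f"{incidence_per_100k:.2f} per 100,000 per year")  # ~5.74
```

This lands close to the reported 5.85 per 100,000 per year; the small gap reflects the person-years approximation.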

After controlling for sex and age, results showed that higher HDL (hazard ratio, 0.84; 95% CI, 0.73-0.96; P = .010) and higher apoA1 (HR, 0.83; 95% CI, 0.72-0.94; P = .005) were associated with a reduced risk for subsequent ALS.

A higher ratio of total cholesterol to HDL was associated with increased ALS risk.

A rise in neurofilaments and other markers of neuronal loss typically occurs within about a year of ALS symptom onset. To ensure that they were capturing participants whose blood samples were taken before the onset of neurodegeneration, the researchers performed a secondary analysis that excluded ALS diagnoses within 5 years of the baseline study visit.

Results of the analysis were largely consistent with models incorporating all participants with regard to magnitude and direction of associations. In addition, the findings persisted in models that controlled for statin use, smoking, and vascular disease.
 

Mechanism unclear

To more closely examine lipid status prior to ALS diagnosis, the researchers performed a nested case-control analysis that involved matching each participant who developed ALS with 20 participants of similar age, sex, and time of enrollment who did not develop the disease.

Linear models showed that levels of LDL and apoB, which are closely correlated, decrease over time in those who developed ALS. This was not the case for HDL and apoA1. “This suggests LDL levels are going down, and we think it’s happening quite some time before symptoms start, even before neurodegeneration starts,” said Dr. Thompson.

How blood lipid levels correlate with ALS risk is unclear. Dr. Thompson noted that LDL is an oxidative stressor and can provoke inflammation, whereas HDL is an antioxidant that is involved in healing. However, given that LDL and HDL don’t cross into the brain in great amounts, “the lipid changes may be a reflection of something else going on that contributes to the risk of ALS,” he said.

More evidence of a causal relationship is needed before any clinical implications can be drawn, including the potential manipulation of lipid levels to prevent ALS, said Dr. Thompson. In addition, even were such a relationship to be established, altering lipid levels in a healthy individual who has no family history of ALS would be unlikely to alter risk.

Dr. Thompson added that among those with a genetic predisposition, lipid changes “may be a marker or clue that something’s going wrong in the nervous system and that ALS might be about to start. That would be the ideal time to treat people at risk of ALS with gene therapy.”
 

Metabolism gone awry

Commenting on the findings, Stephen Goutman, MD, director, Pranger ALS Clinic, associate professor of neurology, Neuromuscular Program, University of Michigan, Ann Arbor, called the study “very interesting.” Of particular note was a trend of decreasing LDL and apoB levels prior to an ALS diagnosis, said Dr. Goutman.

The results are in agreement with several studies that show an alteration in metabolism in individuals with ALS, he said. “These altered metabolic pathways may provide some signal that something has gone awry,” he commented.

He agreed that an “ultimate goal” is to identify factors or biomarkers that can be used to predict whether individuals will develop ALS and to enable intervention to decrease the risk.

This new research highlights the value of population-based registries and large prospective cohorts, said Dr. Goutman. “These help to better define the genetic, environmental, and metabolic factors that increase and predict ALS risk,” he said.

But more work is needed, said Dr. Goutman. He noted that in the study, only 192 participants were diagnosed with ALS more than 5 years after enrollment. “This means additional large cohort studies are needed, especially those that reflect the diversity of the population, for us to solve the mystery of ALS and to prevent it,” he said.

Dr. Thompson and Dr. Goutman have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews- 29(10)


Article Source

From Journal of Neurology, Neurosurgery, and Psychiatry

Publish date: September 15, 2021

Optimal antipsychotic dose for schizophrenia relapse identified

Article Type
Changed
Wed, 11/17/2021 - 11:05

A middle-of-the-road dose of an antipsychotic appears to be optimal for relapse prevention in stable schizophrenia, new research suggests.

Results of a meta-analysis show a 5-mg/day equivalent risperidone dose worked best. Higher doses were associated with more adverse events without showing substantial gains in relapse prevention, and lower doses were associated with greater relapse risk.

“The safest approach is to just to carry on with 5 mg,” which in many cases represents a full dose, lead author Stefan Leucht, MD, professor, department of psychiatry and psychotherapy, Technical University of Munich School of Medicine, Germany, told this news organization.

However, he added, patient preferences and other factors should be considered in dosage decision-making.

The findings were published online August 18 in JAMA Psychiatry.
 

Unique meta-analysis

Antipsychotic drugs are effective for short-term treatment of schizophrenia and prevention of relapse but are associated with movement disorders, weight gain, and other metabolic changes. They are also associated with even more severe adverse events, including tardive dyskinesia and increased cardiovascular risk.

For years, researchers have tried to find the optimal dose of antipsychotic drugs to prevent relapse in patients with stable schizophrenia while mitigating adverse event risk.

For the meta-analysis, researchers searched for fixed-dose, randomized, blinded, or open trials that lasted longer than 3 months and compared two first-generation antipsychotics – haloperidol or fluphenazine – or a second-generation antipsychotic with placebo or a different dose of the same drug.

The analysis included 26 studies with 72 individual dose arms and 4,776 participants with stable schizophrenia.  

Researchers used a dose-response meta-analysis. Unlike a simple meta-analysis, which yields only an "arbitrary" cutoff for the superiority of one drug over placebo or another drug, a dose-response meta-analysis produces a plot or curve "that shows how this evolves with different doses," Dr. Leucht noted.

The investigators estimated dose-response curves for each antipsychotic drug compared with placebo separately and as a group.

They did not have enough data for most of the single antipsychotics, so they converted doses to risperidone equivalents for a pooled analysis across drugs. They chose risperidone because its equivalents “are pretty well-defined,” said Dr. Leucht.
 

Go slow to go low

For the primary outcome of relapse, the dose-response curve showed a hyperbolic shape with a clear plateau. Initially, the plot decreased sharply but then flattened at about 5-mg/day risperidone equivalent (odds ratio, 0.20; 95% confidence interval, 0.13-0.31; relative risk, 0.43; 95% CI, 0.31-0.57).

“We were a little disappointed because we hoped that a dose lower than 5 mg would be most efficacious in terms of relapse rate because this would have reduced the side-effect burden,” Dr. Leucht said.

Nevertheless, he emphasized that doses lower than 5 mg/day risperidone equivalent are not completely ineffective. For example, the 2.5-mg dose reduced the risk of relapse in relative terms by about 40% (RR, 0.63).
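The reported relative risks can be restated as percent relative risk reductions; this is simple arithmetic on the article's figures, not an additional analysis:

```python
# Restate the reported relapse relative risks as percent relative
# risk reductions (simple arithmetic, not an additional analysis).
rr_5mg = 0.43    # relapse RR at the 5-mg/day risperidone equivalent
rr_2_5mg = 0.63  # relapse RR at the 2.5-mg/day dose

rrr_5mg = round((1 - rr_5mg) * 100)      # 57% relative reduction
rrr_2_5mg = round((1 - rr_2_5mg) * 100)  # 37% relative reduction
print(f"{rrr_5mg}% vs {rrr_2_5mg}%")
```

So the lower dose retains a substantial, though smaller, protective effect, consistent with the roughly 40% figure cited in the article.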

Dr. Leucht also pointed out there is “huge interindividual variability.” Therefore, 2.5 mg or even 1 mg may be sufficient for some patients. “It just means for the average patient it’s safest, let’s say, to keep her or him on 5 mg,” he said.  

When lowering the dose, Dr. Leucht noted clinicians should “be very careful and to do it very slowly. It should be very small reductions every 3 to 6 months.”

For the secondary endpoint of rehospitalizations, the shape of the curve was similar to the one for relapse but with lower rates.

“If patients need to be rehospitalized, it usually means that the relapse was major and not only a minor increase in symptoms,” said Dr. Leucht.

The curves for all-cause discontinuation and reduction in overall symptoms were also similar to that of relapse.

However, the curve for dropouts because of adverse events showed that higher doses led to more adverse events. For example, with the 5-mg/day dose, the OR was 1.4 (95% CI, 0.87-2.25) and the RR was 1.38 (95% CI, 0.87-2.15), but with the 15-mg/day dose, the OR was 2.88 (95% CI, 1.52-5.45) and the RR was 2.68 (95% CI, 1.49-4.62).
 

Patient preference key

The data were insufficient to assess differences between men and women or between older and younger patients, Dr. Leucht noted.

However, post-hoc subgroup analyses turned up some interesting findings, he added. For example, patients who take high-potency first-generation antipsychotics such as haloperidol might do well on a lower dose, said Dr. Leucht.

“They may need a dose even lower than 5 mg, perhaps something like 2.5 mg, because these drugs bind so strongly to dopamine receptors,” he said.

He reiterated that patient preferences should always be considered when deciding on antipsychotic dosage.

“Many patients will say they don’t want to relapse anymore, but others will say these drugs have horrible side effects, and they want to go on a lower dose,” said Dr. Leucht.

Clinicians should also factor in patient characteristics, such as comorbidities or substance abuse, as well as severity of past relapses and properties of individual drugs, he added.
 

Reflects real-world experience

Commenting on the findings, Thomas Sedlak, MD, PhD, director, Schizophrenia and Psychosis Consult Clinic and assistant professor of psychiatry and behavioral sciences, Johns Hopkins School of Medicine, Baltimore, said the research “is a fine addition” to a previous analysis that explored dose-response relationships of antipsychotic drugs in the acute phase.

Crunching all the data from studies that have different types of patients and extracting a single dosage that provides maximum benefit is “a great challenge,” said Dr. Sedlak, who was not involved with the research.

The finding that most patients get no additional benefit above 5 mg, the point at which adverse events increase, and that 2.5 mg is sufficient for certain subgroups “agrees well with the experience of many who use these medications regularly,” Dr. Sedlak said.

However, he cautioned that psychiatrists “don’t always intuitively know which patients fall into which dose category or who might require clozapine.”

“Clinicians need to be mindful that it’s easy to overshoot an optimal dose and elicit side effects,” said Dr. Sedlak.

He also noted that severely ill patients are often underrepresented in clinical trials because they are too impaired to participate, “so they may have a different optimal dosage,” he concluded.

Dr. Leucht has reported receiving personal fees for consulting, advising, and/or speaking outside the submitted work from Angelini, Boehringer Ingelheim, Gedeon Richter, Janssen, Johnson & Johnson, Lundbeck, LTS Lohmann, MSD, Otsuka, Recordati, Sanofi Aventis, Sandoz, Sunovion, Teva, Eisai, Rovi, and Amiabel. Dr. Sedlak has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


‘Innovative’ equine therapy helps overcome PTSD symptoms

Article Type
Changed
Fri, 09/03/2021 - 13:08

Equine therapy, which involves interactions with horses in a controlled environment, reduces fear and other symptoms of posttraumatic stress disorder, new research suggests.

Man O'War Project
Dr. Yuval Neria with Crafty, one of the Man O'War Project's equine therapy horses.

Results from a study of about 60 military veterans who underwent weekly sessions of horse-assisted therapy showed “marked reductions” in clinician-rated and self-reported symptoms of PTSD and depression up to 3 months post treatment.

“What we’re doing here with horses is helping people overcome something very specific to PTSD,” coinvestigator Yuval Neria, PhD, professor of clinical medical psychology and director of the PTSD treatment and research program, Columbia University Medical Center, New York, said in an interview.

“It offers the opportunity to overcome fear, to facilitate self-efficacy, to facilitate trust in yourself, to understand your feelings, and perhaps to change them over time,” he said.

In addition, veterans loved the experience, Dr. Neria reported. He noted that many patients with PTSD have trouble with traditional treatments and are eager to try something “creative and new.”

The findings were published online Aug. 31, 2021, in the Journal of Clinical Psychiatry.
 

Building bonds

PTSD affects an estimated 10%-30% of U.S. military personnel. These rates are higher than in the general population because veterans may experience increased trauma through combat, injury, and sexual assault, the investigators noted.

Dr. Yuval Neria

Previous research has suggested that horse-human interactions can build bonds that foster behavioral changes. These powerful animals provide instantaneous feedback, allowing patients to develop emotional awareness.

“Horses are very sensitive to whatever we communicate with them, whether it’s fear or anger or stress,” said Dr. Neria.

Equine-assisted therapy is increasingly being used for various mental and physical conditions. Launching an open-label study to examine this type of treatment for PTSD “was an opportunity to look at something very, very different,” Dr. Neria said.

“This is not psychotherapy, it’s not medication, and it’s not neural stimulation,” he added.

The study included 63 veterans with PTSD (mean age, 50 years; 37% women). Of these, 47 were receiving psychotherapy alone, pharmacotherapy alone, or both. In addition, 48 had at least one comorbid disorder. All were divided into 16 groups of three to five participants each.

The program consisted of eight 90-minute weekly sessions conducted at a large equestrian center. Sessions were co-led by a mental health professional and an equine specialist, who guided participants in horse communication and behavior.

Early sessions focused on acquainting patients with the horses, grooming exercises, and learning “leading,” which involved directing a horse with a rope or wand. During subsequent sessions, patients became more comfortable with managing the horses in individual and teamwork exercises.

The horses were specifically chosen for their temperament and had no history of aggression. A horse wrangler attended sessions to further ensure safety.
 

Few dropouts

The study included four assessment points: pretreatment, midpoint, post treatment, and 3-month follow-up.

All 63 participants completed baseline assessments. Only five patients (7.9%) discontinued the program.

“We didn’t see dropouts at the rate we usually see in evidence-based therapies for PTSD, which is remarkable and suggests that people really loved it,” said Dr. Neria.

Man O'War Project
Veteran Matthew Rypa with Crafty, an equine therapy horse in the Man O'War Project.

The primary outcome was the Clinician-Administered PTSD Scale–5 (CAPS-5), a structured interview that evaluates intrusive memories, social avoidance, and other symptoms based on DSM-5 criteria.

In the intent-to-treat analysis, mean CAPS-5 scores decreased from 38.6 at baseline to 26.9 post treatment. In addition, 29 (46.0%) and 23 (36.5%) participants scored below the PTSD diagnostic threshold of 25 at posttreatment and follow-up, respectively.

Notably, 50.8% of the study population had a clinically significant change, defined as 30% or greater decrease in CAPS-5 score, at post treatment; 54.0% had a significant change at follow-up.
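The two outcome definitions are simple arithmetic on a pair of scores: a drop of 30% or more from baseline counts as a clinically significant change, and a post-treatment score under 25 falls below the diagnostic threshold. A minimal sketch, using the reported group means as illustrative inputs (individual-level scores are not published):

```python
def caps5_outcomes(baseline, post, cutoff=25, response_frac=0.30):
    """Apply the article's two definitions to a pair of CAPS-5 scores:
    significant change = a drop of 30% or more from baseline;
    below threshold = a post-treatment score under the diagnostic cutoff."""
    drop = (baseline - post) / baseline
    return {"significant_change": drop >= response_frac,
            "below_threshold": post < cutoff}

# Reported group means, used here only to demonstrate the calculation:
print(caps5_outcomes(38.6, 26.9))
# {'significant_change': True, 'below_threshold': False}  (a 30.3% drop)
```

Applied to the group means, the average drop just clears the 30% response bar while the average post-treatment score remains above the diagnostic cutoff, which is consistent with only a subset of participants meeting each criterion individually.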

Mean scores on the self-reported 20-item PTSD Checklist for DSM-5 questionnaire decreased from 50.7 at baseline to 34.6 at study termination.

Depression symptoms, measured by the clinician-rated Hamilton Depression Rating Scale and the self-reported Beck Depression Inventory–II, also improved.

Structural, functional change

The results did not differ by age, gender, or type of trauma. Dr. Neria noted that many women in the study had suffered sexual abuse or assault, suggesting that the intervention might be appropriate for PTSD outside the military.

“I’m very keen on moving this along into a civilian population,” he said.

The study did not examine potential mechanisms of action. The benefits may come from something inherent in the equine interactions, the human group process, or just being in the beautiful setting where the treatment took place, the investigators noted.

However, Dr. Neria thinks there is another potential explanation – real changes in the brain.

Neuroimaging of a subsample of 20 participants before and after the intervention showed a significant increase in caudate functional connectivity and a reduction in gray matter density of the thalamus and the caudate.

“We see a big change both structurally and functionally,” with the results pointing to an impact on the reward network of the brain, said Dr. Neria.

“This suggests that pleasure was perhaps the main mechanism of action,” which corresponds with patient reports of really enjoying the experience, he added.

Dr. Neria noted that equine therapy is different from bonding with a loyal dog. Interacting with a large and powerful animal may give veterans a sense of accomplishment and self-worth, which can be tremendously therapeutic.
 

Next step in therapy?

Commenting on the research, retired Col. Elspeth Cameron Ritchie, MD, chair of psychiatry, MedStar Washington Hospital Center, Washington, called equine therapy “innovative” in PTSD.

Dr. Elspeth Cameron Ritchie

“I see this as the next step in finding acceptable therapies that people like to do,” she said.

Some patients have an aversion to talk therapy because it makes them relive their trauma, and many dislike the side effects of medications, which can include erectile dysfunction, said Dr. Ritchie, who was not involved with the research.

“So something like this that they can enjoy, have a sense of mastery, can bond with an animal, I think is wonderful,” she said.

Dr. Ritchie noted that working with animals offers “a kind of biofeedback” that may calm anxieties, help maintain control, and “is very nonjudgmental.”

However, she pointed out that equine therapy is not new. For example, horses have been used previously to treat patients with a variety of disabilities, including autism.

Dr. Ritchie thought it was “very wise” that study participants just learned to control the horses and didn’t actually ride them, because that could be a frightening experience.

Nonetheless, she noted equine therapy “is not going to be accessible for everybody.”

In addition, Dr. Ritchie was surprised that the investigators didn’t mention more of the quite extensive research that has been conducted on dog therapy in patients with PTSD.

Dr. Neria and Dr. Ritchie have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Equine therapy, which involves interactions with horses in a controlled environment, reduces fear and other symptoms of posttraumatic stress disorder, new research suggests.

Man O&#039;War Project
Dr. Yuval Neria with Crafty, one of the Man O'War Project's equine therapy horses.

Results from a study of about 60 military veterans who underwent weekly sessions of horse-assisted therapy showed “marked reductions” in clinician-rated and self-reported symptoms of PTSD and depression up to 3 months post treatment.

“What we’re doing here with horses is helping people overcome something very specific to PTSD,” coinvestigator Yuval Neria, PhD, professor of clinical medical psychology and director of the PTSD treatment and research program, Columbia University Medical Center, New York, said in an interview.

“It offers the opportunity to overcome fear, to facilitate self-efficacy, to facilitate trust in yourself, to understand your feelings, and perhaps to change them over time, he said.

In addition, veterans loved the experience, Dr. Neria reported. He noted that many patients with PTSD have trouble with traditional treatments and are eager to try something “creative and new.”

The findings were published online Aug. 31, 2021, in the Journal of Clinical Psychiatry.
 

Building bonds

PTSD affects an estimated 10%-30% of U.S. military personnel. These rates are higher than in the general population because veterans may experience increased trauma through combat, injury, and sexual assault, the investigators noted.

Dr. Yuval Neria

Previous research has suggested that horse-human interactions can build bonds that foster behavioral changes. These powerful animals provide instantaneous feedback, allowing patients to develop emotional awareness.

“Horses are very sensitive to whatever we communicate with them, whether it’s fear or anger or stress,” said Dr. Neria.

Equine-assisted therapy is increasingly being used for various mental and physical conditions. Launching an open-label study to examine this type of treatment for PTSD “was an opportunity to look at something very, very different,” Dr. Neria said.

“This is not psychotherapy, it’s not medication, and it’s not neural stimulation,” he added.

The study included 63 veterans with PTSD (mean age, 50 years; 37% women). Of these, 47 were receiving psychotherapy alone, pharmacotherapy alone, or both. In addition, 48 had at least one comorbid disorder. All were divided into 16 groups of three to five participants each.

The program consisted of eight 90-minute weekly sessions conducted at a large equestrian center. Sessions were coled by a mental health professional and an equine specialist who guided participants in horse communication and behavior.

Early sessions focused on acquainting patients with the horses, grooming exercises, and learning “leading,” which involved directing a horse with a rope or wand. During subsequent sessions, patients became more comfortable with managing the horses in individual and teamwork exercises.

The horses were specifically chosen for their temperament and had no history of aggression. A horse wrangler attended sessions to further ensure safety.
 

Few dropouts

The study included four assessment points: pretreatment, midpoint, post treatment, and 3-month follow-up.

All 63 participants completed baseline assessments. Only five patients (7.9%) discontinued the program.

“We didn’t see dropouts at the rate we usually see in evidence-based therapies for PTSD, which is remarkable and suggests that people really loved it,” said Dr. Neria.

Man O&#039;War Project
Veteran Matthew Rypa with Crafty, an equine therapy horse in the Man O'War Project.

The primary outcome was the Clinician-Administered PTSD Scale–5 (CAPS-5), a structured interview that evaluates intrusive memories, social avoidance, and other symptoms based on DSM-5 criteria.

In the intent-to-treat analysis, mean CAPS-5 scores decreased from 38.6 at baseline to 26.9 post treatment. In addition, 29 (46.0%) and 23 (36.5%) participants scored below the PTSD diagnostic threshold of 25 at posttreatment and follow-up, respectively.

Notably, 50.8% of the study population had a clinically significant change, defined as 30% or greater decrease in CAPS-5 score, at post treatment; 54.0% had a significant change at follow-up.

Mean scores on the self-reported 20-item PTSD Checklist for DSM-5 questionnaire decreased from 50.7 at baseline to 34.6 at study termination.

Depression symptoms, measured by the clinician-rated Hamilton Depression Rating Scale and the self-reported Beck Depression Inventory–II, also improved.
 

 

 

Structural, functional change

The results did not differ by age, gender, or type of trauma. Dr. Neria noted that many women in the study had suffered sexual abuse or assault, suggesting that the intervention might be appropriate for PTSD outside the military.

“I’m very keen on moving this along into a civilian population,” he said.

The study did not examine potential mechanisms of action. The benefits may come from something inherent in the equine interactions, the human group process, or just being in the beautiful setting where the treatment took place, the investigators noted.

However, Dr. Neria thinks there is another potential explanation – real changes in the brain.

Equine therapy, which involves interactions with horses in a controlled environment, reduces fear and other symptoms of posttraumatic stress disorder, new research suggests.

Man O'War Project
Dr. Yuval Neria with Crafty, one of the Man O'War Project's equine therapy horses.

Results from a study of about 60 military veterans who underwent weekly sessions of horse-assisted therapy showed “marked reductions” in clinician-rated and self-reported symptoms of PTSD and depression up to 3 months post treatment.

“What we’re doing here with horses is helping people overcome something very specific to PTSD,” coinvestigator Yuval Neria, PhD, professor of clinical medical psychology and director of the PTSD treatment and research program, Columbia University Medical Center, New York, said in an interview.

“It offers the opportunity to overcome fear, to facilitate self-efficacy, to facilitate trust in yourself, to understand your feelings, and perhaps to change them over time,” he said.

In addition, veterans loved the experience, Dr. Neria reported. He noted that many patients with PTSD have trouble with traditional treatments and are eager to try something “creative and new.”

The findings were published online Aug. 31, 2021, in the Journal of Clinical Psychiatry.
 

Building bonds

PTSD affects an estimated 10%-30% of U.S. military personnel. These rates are higher than in the general population because veterans may experience increased trauma through combat, injury, and sexual assault, the investigators noted.

Previous research has suggested that horse-human interactions can build bonds that foster behavioral changes. These powerful animals provide instantaneous feedback, allowing patients to develop emotional awareness.

“Horses are very sensitive to whatever we communicate with them, whether it’s fear or anger or stress,” said Dr. Neria.

Equine-assisted therapy is increasingly being used for various mental and physical conditions. Launching an open-label study to examine this type of treatment for PTSD “was an opportunity to look at something very, very different,” Dr. Neria said.

“This is not psychotherapy, it’s not medication, and it’s not neural stimulation,” he added.

The study included 63 veterans with PTSD (mean age, 50 years; 37% women). Of these, 47 were receiving psychotherapy alone, pharmacotherapy alone, or both, and 48 had at least one comorbid disorder. Participants were divided into 16 groups of three to five each.

The program consisted of eight 90-minute weekly sessions conducted at a large equestrian center. Sessions were coled by a mental health professional and an equine specialist who guided participants in horse communication and behavior.

Early sessions focused on acquainting patients with the horses, grooming exercises, and learning “leading,” which involved directing a horse with a rope or wand. During subsequent sessions, patients became more comfortable with managing the horses in individual and teamwork exercises.

The horses were specifically chosen for their temperament and had no history of aggression. A horse wrangler attended sessions to further ensure safety.
 

Few dropouts

The study included four assessment points: pretreatment, midpoint, post treatment, and 3-month follow-up.

All 63 participants completed baseline assessments. Only five patients (7.9%) discontinued the program.

“We didn’t see dropouts at the rate we usually see in evidence-based therapies for PTSD, which is remarkable and suggests that people really loved it,” said Dr. Neria.

Man O'War Project
Veteran Matthew Rypa with Crafty, an equine therapy horse in the Man O'War Project.

The primary outcome was the Clinician-Administered PTSD Scale–5 (CAPS-5), a structured interview that evaluates intrusive memories, social avoidance, and other symptoms based on DSM-5 criteria.

In the intent-to-treat analysis, mean CAPS-5 scores decreased from 38.6 at baseline to 26.9 post treatment. In addition, 29 (46.0%) and 23 (36.5%) participants scored below the PTSD diagnostic threshold of 25 at posttreatment and follow-up, respectively.

Notably, 50.8% of the study population had a clinically significant change, defined as 30% or greater decrease in CAPS-5 score, at post treatment; 54.0% had a significant change at follow-up.

Mean scores on the self-reported 20-item PTSD Checklist for DSM-5 questionnaire decreased from 50.7 at baseline to 34.6 at study termination.
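As a rough sanity check on those figures (an illustrative calculation only, using the means reported above), the drop in mean CAPS-5 score from 38.6 to 26.9 works out to just over a 30% decrease, consistent with the study's threshold for clinically significant change; the self-reported PCL-5 drop is of similar size:

```python
# Illustrative check of the percent reductions reported in the study.
def percent_decrease(baseline: float, post: float) -> float:
    """Return the percentage drop from baseline to post."""
    return (baseline - post) / baseline * 100

caps5 = percent_decrease(38.6, 26.9)  # clinician-rated CAPS-5
pcl5 = percent_decrease(50.7, 34.6)   # self-reported PCL-5 checklist

print(f"CAPS-5: {caps5:.1f}% decrease")  # ~30.3%
print(f"PCL-5: {pcl5:.1f}% decrease")    # ~31.8%
```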

Depression symptoms, measured by the clinician-rated Hamilton Depression Rating Scale and the self-reported Beck Depression Inventory–II, also improved.

Structural, functional change

The results did not differ by age, gender, or type of trauma. Dr. Neria noted that many women in the study had suffered sexual abuse or assault, suggesting that the intervention might be appropriate for PTSD outside the military.

“I’m very keen on moving this along into a civilian population,” he said.

The study did not examine potential mechanisms of action. The benefits may come from something inherent in the equine interactions, the human group process, or just being in the beautiful setting where the treatment took place, the investigators noted.

However, Dr. Neria thinks there is another potential explanation – real changes in the brain.

Neuroimaging of a subsample of 20 participants before and after the intervention showed a significant increase in caudate functional connectivity and a reduction in gray matter density of the thalamus and the caudate.

“We see a big change both structurally and functionally,” with the results pointing to an impact on the reward network of the brain, said Dr. Neria.

“This suggests that pleasure was perhaps the main mechanism of action,” which corresponds with patient reports of really enjoying the experience, he added.

Dr. Neria noted that equine therapy is different from bonding with a loyal dog. Interacting with a large and powerful animal may give veterans a sense of accomplishment and self-worth, which can be tremendously therapeutic.
 

Next step in therapy?

Commenting on the research, retired Col. Elspeth Cameron Ritchie, MD, chair of psychiatry, MedStar Washington Hospital Center, Washington, called equine therapy “innovative” in PTSD.

“I see this as the next step in finding acceptable therapies that people like to do,” she said.

Some patients have an aversion to talk therapy because it makes them relive their trauma; and many dislike the side effects of medications, which can include erectile dysfunction, said Dr. Ritchie, who was not involved with the research.

“So something like this that they can enjoy, have a sense of mastery, can bond with an animal, I think is wonderful,” she said.

Dr. Ritchie noted that working with animals offers “a kind of biofeedback” that may calm anxieties, help maintain control, and “is very nonjudgmental.”

However, she pointed out that equine therapy is not new. For example, horses have been used previously to treat patients with a variety of disabilities, including autism.

Dr. Ritchie thought it was “very wise” that study participants just learned to control the horses and didn’t actually ride them, because that could be a frightening experience.

Nonetheless, she noted equine therapy “is not going to be accessible for everybody.”

In addition, Dr. Ritchie was surprised that the investigators didn’t mention more of the quite extensive research that has been conducted on dog therapy in patients with PTSD.

Dr. Neria and Dr. Ritchie disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

Stimulating jobs may help stave off dementia onset


Individuals with cognitively stimulating jobs are at a lower risk of developing dementia than their peers with less challenging employment, new research suggests.

Results from a large, multicohort study also showed an association between cognitive stimulation and lower levels of certain plasma proteins, providing possible clues on a protective biological mechanism.

“These new findings support the hypothesis that mental stimulation in adulthood may postpone the onset of dementia,” Mika Kivimäki, PhD, professor and director of the Whitehall II Study, department of epidemiology, University College London, said in an interview.

The results were published online Aug. 19, 2021, in the BMJ.
 

‘Work fast and hard’

Researchers assessed the association between workplace cognitive stimulation and dementia incidence in seven cohorts that included almost 108,000 men and women (mean age, 44.6 years). All were free of dementia at baseline.

Participants included civil servants, public sector employees, forestry workers, and others from the general working population.

Investigators separated the participants into three categories of workplace cognitive stimulation: “high,” which referred to both high job demand and high job control; “low,” which referred to low demands and low control; and “medium,” which referred to all other combinations of job demand and job control.

“Highly cognitively stimulating jobs require you to work fast and hard, learn new things, be creative, and have a high level of skill,” said Dr. Kivimäki.

The researchers controlled for low education, hypertension, smoking, obesity, depression, physical inactivity, diabetes, low social contact, excessive alcohol consumption, and traumatic brain injury. These represent 10 of the 12 dementia risk factors named by the 2020 Lancet Commission on Dementia Prevention as having convincing evidence, Dr. Kivimäki noted.

Although the investigators had no data on the other two risk factors of hearing loss and air pollution, these are unlikely to be confounding factors, he said.

Follow-up for incident dementia varied from 13.7 to 30.1 years, depending on the cohort, and was 16.7 years in the total patient population. The mean age at dementia onset was 71.2 years.
 

Benefits across the life course

Results showed that incident dementia per 10,000 person years was 7.3 in the low–cognitive stimulation group and 4.8 in the high-stimulation group, for a difference of 2.5.

“These differences were relatively small because the incidence of dementia in this relatively young population was low,” Dr. Kivimäki said.
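Put another way (an illustrative calculation only, using the rates quoted above), the gap of 2.5 cases per 10,000 person years corresponds to a crude rate ratio of about 0.66; the adjusted hazard ratio the study reports is somewhat closer to 1, presumably because it controls for education and lifestyle covariates:

```python
# Illustrative arithmetic on the incidence rates quoted in the article.
low_stim = 7.3   # incident dementia per 10,000 person years, low stimulation
high_stim = 4.8  # incident dementia per 10,000 person years, high stimulation

absolute_difference = low_stim - high_stim  # cases per 10,000 person years
crude_rate_ratio = high_stim / low_stim     # unadjusted; the adjusted HR was 0.77

print(f"Absolute difference: {absolute_difference:.1f} per 10,000 person years")
print(f"Crude rate ratio: {crude_rate_ratio:.2f}")
```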

Compared with those with low stimulation, the adjusted hazard ratio for dementia for those with high stimulation was 0.77 (95% CI, 0.65-0.92).

The results were similar for men and women, and for those younger and older than 60 years. However, the link with workplace cognitive stimulation appeared stronger for Alzheimer’s disease than for other dementias.

There also appeared to be additive effects of higher cognitive stimulation in both childhood, as indicated by higher educational attainment, and adulthood, based on work characteristics, said Dr. Kivimäki.

“These findings support the benefits of cognitive stimulation across the life course, with education leading to higher peak cognitive performance and cognitive stimulation at work lowering age-related cognitive decline,” he added.

The findings don’t seem to be the result of workers with cognitive impairment remaining in unchallenging jobs, he noted. Separate analyses showed lower dementia incidence even when 10 years or more separated the assessment of cognitive stimulation and the dementia diagnosis.

“This suggests that the findings are unlikely to be biased due to reverse causation,” Dr. Kivimäki said.

Possible mechanism

Findings were similar when the researchers assessed effect from job changes. “This is probably because people in highly stimulating jobs are more likely to change to another highly stimulating job than to a low-stimulating job,” said Dr. Kivimäki. “Similarly, people with less stimulating jobs are seldom able to change to a substantially more stimulating job.”

As a dementia risk factor, low workplace stimulation is comparable with high alcohol intake and physical inactivity, but is weaker than education, diabetes, smoking, hypertension, and obesity, Dr. Kivimäki noted.

When asked about individuals with less cognitively stimulating jobs who are enormously stimulated outside work, he said that “previous large-scale studies have failed to find evidence that leisure time cognitive activity would significantly reduce risk of dementia.”

To explore potential underlying mechanisms, the investigators examined almost 5,000 plasma proteins in more than 2,200 individuals from one cohort in the Whitehall II study. They found six proteins were significantly lower among participants with high versus low cognitive stimulation.

In another analysis that included more than 13,500 participants from the Whitehall and another cohort, higher levels of three of these plasma proteins were associated with increased dementia risk – or conversely, lower protein levels with lower dementia risk.

The findings suggest a “novel plausible explanation” for the link between workplace cognitive stimulation and dementia risk, said Dr. Kivimäki.

He noted that higher levels of certain proteins prevent brain cells from forming new connections.
 

‘Some of the most compelling evidence to date’

In an accompanying editorial, Serhiy Dekhtyar, PhD, assistant professor (Docent), Aging Research Center, Karolinska Institute, Stockholm, noted that the study is “an important piece of work” and “some of the most compelling evidence to date” on the role of occupational cognitive stimulation in dementia risk.

The large-scale investigation in multiple cohorts and contexts has “advanced the field” and could help “explain previously mixed findings in the literature,” Dr. Dekhtyar said in an interview.

Importantly, the researchers provide “an indication of biological mechanisms potentially connecting work mental stimulation and dementia,” he added.

However, Dr. Dekhtyar noted that the difference of 2.5 incident cases of dementia per 10,000 person years of follow-up between the low and high mental-stimulation groups “is not especially large” – although it is comparable with other established risk factors for dementia.

He suspects the effect size would have been larger had the follow-up for dementia been longer.

Dr. Dekhtyar also raised the possibility that “innate cognition” might affect both educational and occupational attainment, and the subsequent dementia risk.

“Without taking this into account, we may inadvertently conclude that education or occupational stimulation help differentially preserve cognition into late life – when in reality, it may be initial differences in cognitive ability that are preserved throughout life,” he concluded.

Funding sources for the study included Nordic Research Programme on Health and Welfare (NordForsk), Medical Research Council, Wellcome Trust, Academy of Finland, and Helsinki Institute of Life Science. Dr. Kivimäki has received support from NordForsk, the UK Medical Research Council, the Wellcome Trust, the Academy of Finland, and the Helsinki Institute of Life Science. Dr. Dekhtyar disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
