Dementia Deemed Highly Preventable: Here’s How
A new report on the preventability of dementia is both exciting and paradigm-shifting. The report, published in The Lancet by the Lancet Commission on Dementia, estimates that nearly half of dementia cases worldwide could be prevented or delayed by addressing 14 modifiable risk factors.
This is paradigm-shifting because dementia is often perceived as an inevitable consequence of the aging process, with a major genetic component. But this study suggests that modifying these risk factors can benefit everyone, irrespective of genetic risk, and that it’s important to have a life-course approach. It’s never too early or too late to start to modify these factors.
We’ve known for a long time that many chronic diseases are highly preventable and modifiable. Some that come to mind are type 2 diabetes, coronary heart disease, and even certain forms of cancer. Modifiable risk factors include cigarette smoking, diet, physical activity, and maintaining a healthy weight. This study suggests that many of the same risk factors and more are relevant to reducing risk for dementia.
Let’s go through the risk factors, many of which are behavioral. These risk factors include lifestyle factors such as lack of physical activity, cigarette smoking, excessive alcohol consumption, and obesity. The cardiovascular or vascular-specific risk factors include not only those behavioral factors but also hypertension, high LDL cholesterol, and diabetes. Cognitive engagement–specific risk factors include social isolation, which is a major risk factor for dementia; untreated hearing or vision loss, which can exacerbate social isolation and depression; and low educational attainment, which can be related to less cognitive engagement.
They also mention traumatic brain injury from an accident or contact sports without head protection as a risk factor, and the environmental risk factor of air pollution or poor air quality.
Two of these risk factors are new since the previous report in 2020: elevated LDL cholesterol and untreated vision loss, both of which are quite treatable. Overall, these findings suggest that a lot can be done to lower dementia risk, but it requires individual behavior change as well as a comprehensive approach: involvement of the healthcare system to improve screening and access, and public policy to reduce air pollution.
Some of these risk factors are more relevant to women, especially the social isolation that is so common later in life in women. In the United States, close to two out of three patients with dementia are women.
So, informing our patients about these risk factors and what can be done in terms of behavior modification, increased screening, and treatment for these conditions can go a long way in helping them reduce their risk for dementia.
Dr. Manson is professor of medicine and the Michael and Lee Bell Professor of Women’s Health, Harvard Medical School, chief, Division of Preventive Medicine, Brigham and Women’s Hospital, Boston, and past president, North American Menopause Society, 2011-2012. She disclosed receiving study pill donation and infrastructure support from Mars Symbioscience (for the COSMOS trial).
A version of this article appeared on Medscape.com.
AHS White Paper Guides Treatment of Posttraumatic Headache in Youth
The guidance document, the first of its kind, covers risk factors for prolonged recovery, along with pharmacologic and nonpharmacologic management strategies, and supports an emphasis on multidisciplinary care, lead author Carlyn Patterson Gentile, MD, PhD, attending physician in the Division of Neurology at Children’s Hospital of Philadelphia in Pennsylvania, and colleagues reported.
“There are no guidelines to inform the management of posttraumatic headache in youth, but multiple studies have been conducted over the past 2 decades,” the authors wrote in Headache. “This white paper aims to provide a thorough review of the current literature, identify gaps in knowledge, and provide a road map for [posttraumatic headache] management in youth based on available evidence and expert opinion.”
Clarity for an Underrecognized Issue
According to Russell Lonser, MD, professor and chair of neurological surgery at Ohio State University, Columbus, the white paper is important because it offers concrete guidance for health care providers who may be less familiar with posttraumatic headache in youth.
“It brings together all of the previous literature ... in a very well-written way,” Dr. Lonser said in an interview. “More than anything, it could reassure [providers] that they shouldn’t be hunting down potentially magical cures, and reassure them in symptomatic management.”
Meeryo C. Choe, MD, associate clinical professor of pediatric neurology at UCLA Health in Calabasas, California, said the paper also helps shine a light on what may be a more common condition than the public suspects.
“While the media focuses on the effects of concussion in professional sports athletes, the biggest population of athletes is in our youth population,” Dr. Choe said in a written comment. “Almost 25 million children participate in sports throughout the country, and yet we lack guidelines on how to treat posttraumatic headache which can often develop into persistent postconcussive symptoms.”
This white paper, she noted, builds on Dr. Gentile’s 2021 systematic review, introduces new management recommendations, and aligns with the latest consensus statement from the Concussion in Sport Group.
Risk Factors
The white paper first emphasizes the importance of early identification of youth at high risk for prolonged recovery from posttraumatic headache. Risk factors include female sex, adolescent age, a high number of acute symptoms following the initial injury, and social determinants of health.
“I agree that it is important to identify these patients early to improve the recovery trajectory,” Dr. Choe said.
Identifying these individuals quickly allows for timely intervention with both pharmacologic and nonpharmacologic therapies, Dr. Gentile and colleagues noted, potentially mitigating persistent symptoms. Clinicians are encouraged to perform thorough initial assessments to identify these risk factors and initiate early, personalized management plans.
Initial Management of Acute Posttraumatic Headache
For the initial management of acute posttraumatic headache, the white paper recommends a scheduled dosing regimen of simple analgesics. Ibuprofen at a dosage of 10 mg/kg every 6-8 hours (up to a maximum of 600 mg per dose) combined with acetaminophen has shown the best evidence for efficacy. Provided the patient is clinically stable, this regimen should be initiated within 48 hours of the injury and maintained with scheduled dosing for 3-10 days.
If effective, these medications can subsequently be used on an as-needed basis. Careful usage of analgesics is crucial, the white paper cautions, as overadministration can lead to medication-overuse headaches, complicating the recovery process.
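To make the weight-based arithmetic concrete, here is a minimal, purely illustrative sketch in Python of the per-dose calculation described above; the 10 mg/kg dose and 600 mg per-dose ceiling are as reported in the white paper, while the function name and structure are hypothetical, and this is not clinical dosing software.

```python
def ibuprofen_dose_mg(weight_kg: float, mg_per_kg: float = 10.0, max_mg: float = 600.0) -> float:
    """Weight-based ibuprofen dose with a per-dose cap, per the regimen reported above."""
    return min(weight_kg * mg_per_kg, max_mg)

# Example: a 35-kg child receives 350 mg; a 70-kg adolescent is capped at 600 mg.
for weight in (35, 70):
    print(f"{weight} kg -> {ibuprofen_dose_mg(weight):.0f} mg per dose")
```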
Secondary Treatment Options
In cases where first-line oral medications are ineffective, the AHS white paper outlines several secondary treatment options. These include acute intravenous therapies such as ketorolac, dopamine receptor antagonists, and intravenous fluids. Nerve blocks and oral corticosteroid bridges may also be considered.
The white paper stresses the importance of individualized treatment plans that consider the specific needs and responses of each patient, noting that the evidence supporting these approaches is primarily derived from retrospective studies and case reports.
“Patient preferences should be factored in,” said Sean Rose, MD, pediatric neurologist and codirector of the Complex Concussion Clinic at Nationwide Children’s Hospital, Columbus, Ohio.
Supplements and Preventive Measures
For adolescents and young adults at high risk of prolonged posttraumatic headache, the white paper suggests the use of riboflavin and magnesium supplements. Small randomized clinical trials suggest that these supplements may speed recovery when started within 48 hours of injury and continued for 1-2 weeks.
If significant headache persists after 2 weeks, a regimen of riboflavin 400 mg daily and magnesium 400-500 mg nightly can be trialed for 6-8 weeks, in line with recommendations for migraine prevention. Additionally, melatonin at a dose of 3-5 mg nightly for an 8-week course may be considered for patients experiencing comorbid sleep disturbances.
Targeted Preventative Therapy
The white paper emphasizes the importance of targeting preventative therapy to the primary headache phenotype.
For instance, patients presenting with a migraine phenotype, or those with a personal or family history of migraines, may be most likely to respond to medications proven effective in migraine prevention, such as amitriptyline, topiramate, and propranolol.
“Most research evidence [for treating posttraumatic headache in youth] is still based on the treatment of migraine,” Dr. Rose pointed out in a written comment.
Dr. Gentile and colleagues recommend initiating preventive therapies 4-6 weeks post injury if headaches are not improving, occur more than 1-2 days per week, or significantly impact daily functioning.
Specialist Referrals and Physical Activity
Referral to a headache specialist is advised for patients who do not respond to first-line acute and preventive therapies. Specialists can offer advanced diagnostic and therapeutic options, the authors noted, ensuring a comprehensive approach to managing posttraumatic headache.
The white paper also recommends noncontact, sub–symptom threshold aerobic physical activity and activities of daily living after an initial 24-48 hour period of symptom-limited cognitive and physical rest. Engaging in these activities may promote faster recovery and help patients gradually return to their normal routines.
“This has been a shift in the concussion treatment approach over the last decade, and is one of the most important interventions we can recommend as physicians,” Dr. Choe noted. “This is where pediatricians and emergency department physicians seeing children acutely can really make a difference in the recovery trajectory for a child after a concussion. ‘Cocoon therapy’ has been proven not only to not work, but be detrimental to recovery.”
Nonpharmacologic Interventions
Based on clinical assessment, nonpharmacologic interventions may also be considered, according to the white paper. These interventions include cervico-vestibular therapy, which addresses neck and balance issues, and cognitive-behavioral therapy, which helps manage the psychological aspects of chronic headache. Dr. Gentile and colleagues highlighted the potential benefits of a collaborative care model that incorporates these nonpharmacologic interventions alongside pharmacologic treatments, providing a holistic approach to posttraumatic headache management.
“Persisting headaches after concussion are often driven by multiple factors,” Dr. Rose said. “Multidisciplinary concussion clinics can offer multiple treatment approaches such as behavioral, physical therapy, exercise, and medication options.”
Unmet Needs
The white paper concludes by calling for high-quality prospective cohort studies and placebo-controlled randomized trials to further advance the understanding and treatment of posttraumatic headache in children.
Dr. Lonser, Dr. Choe, and Dr. Rose all agreed.
“More focused treatment trials are needed to gauge efficacy in children with headache after concussion,” Dr. Rose said.
Specifically, Dr. Gentile and colleagues underscored the need to standardize data collection via common elements, which could improve the ability to compare results across studies and develop more effective treatments. In addition, research into the underlying pathophysiology of posttraumatic headache is crucial for identifying new therapeutic targets and clinical and biological markers that can personalize patient care.
They also stressed the importance of exploring the impact of health disparities and social determinants on posttraumatic headache outcomes, aiming to develop interventions that are equitable and accessible to all patient populations.

The white paper was approved by the AHS and supported by National Institutes of Health/National Institute of Neurological Disorders and Stroke grant K23 NS124986. The authors disclosed relationships with Eli Lilly, Pfizer, Amgen, and others. The interviewees disclosed no conflicts of interest.
FROM HEADACHE
Anxiety Linked to a Threefold Increased Risk for Dementia
TOPLINE:
Chronic and new-onset anxiety are each associated with an approximately threefold increased risk for all-cause dementia in older adults, new research shows.
METHODOLOGY:
- A total of 2132 participants aged 55-85 years (mean age, 76 years) were recruited from the Hunter Community Study. Of these, 53% were women.
- Participants were assessed over three different waves, 5 years apart. Demographic and health-related data were captured at wave 1.
- Researchers used the Kessler Psychological Distress Scale (K10) to measure anxiety at two points: baseline (wave 1) and first follow-up (wave 2), with a 5-year interval between them. Anxiety was classified as chronic if present at both waves, resolved if present only at wave 1, and new if appearing only at wave 2 (a brief sketch of this classification rule follows the list).
- The primary outcome was incident all-cause dementia during the follow-up period (maximum, 13 years after baseline), identified using International Classification of Diseases, 10th Revision (ICD-10) codes.
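The wave-based classification above can be expressed as a simple rule. The following is a minimal, illustrative Python sketch; the function and variable names are hypothetical and do not come from the study.

```python
def classify_anxiety(wave1: bool, wave2: bool) -> str:
    """Classify anxiety exposure from two assessment waves, per the study's definitions."""
    if wave1 and wave2:
        return "chronic"   # present at baseline and first follow-up
    if wave1:
        return "resolved"  # present at baseline only
    if wave2:
        return "new"       # present at first follow-up only
    return "none"

print(classify_anxiety(wave1=True, wave2=True))    # chronic
print(classify_anxiety(wave1=False, wave2=True))   # new
```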
TAKEAWAY:
- Out of 2132 cognitively healthy participants, 64 developed dementia, with an average time to diagnosis of 10 years. Chronic anxiety was linked to a 2.8-fold increased risk for dementia, while new-onset anxiety was associated with a 3.2-fold increased risk (P = .01).
- Participants younger than 70 years with chronic anxiety had a 4.6-fold increased risk for dementia (P = .03), and those with new-onset anxiety had a 7.2 times higher risk for dementia (P = .004).
- There was no significant risk for dementia in participants with anxiety that had resolved.
- Investigators speculated that individuals with anxiety were more likely to engage in unhealthy lifestyle behaviors, such as poor diet and smoking, which can lead to cardiovascular disease — a condition strongly associated with dementia.
IN PRACTICE:
“This prospective cohort study used causal inference methods to explore the role of anxiety in promoting the development of dementia,” lead author Kay Khaing, MMed, The University of Newcastle, Australia, wrote in a press release. “The findings suggest that anxiety may be a new risk factor to target in the prevention of dementia and also indicate that treating anxiety may reduce this risk.”
SOURCE:
Kay Khaing, MMed, of The University of Newcastle, Australia, led the study, which was published online in the Journal of the American Geriatrics Society.
LIMITATIONS:
Anxiety was measured using K10, which assessed symptoms experienced in the most recent 4 weeks, raising concerns about its accuracy over the entire observation period. The authors acknowledged that despite using a combination of the total K10 score and the anxiety subscale, the overlap of anxiety and depression might not be fully disentangled, leading to residual confounding by depression. Additionally, 33% of participants were lost to follow-up, and those lost had higher anxiety rates at baseline, potentially leading to missing cases of dementia and affecting the effect estimate.
DISCLOSURES:
This study did not report any funding or conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
NODDI and DTI in Remote Mild Traumatic Brain Injury
Advanced diffusion MRI can reveal subtle, lingering microstructural brain changes in service members with remote mild traumatic brain injury (mTBI) that conventional imaging misses, according to authors of a recent study. In particular, they said, using neurite orientation dispersion and density imaging (NODDI) to monitor long-term mTBI impact on brain regions related to cognitive and emotional processing can help clinicians assess recovery, predict progression, and optimize treatment.
“Currently,” said co-senior study author Ping-Hong Yeh, PhD, “there is a lack of minimally invasive, quantitative diagnostic biomarkers for monitoring progression or recovery after mild TBI. However, mild TBI can be quite disabling, with many patients reporting symptoms months or even years after injury. This is the most difficult part to diagnose.” Dr. Yeh is a researcher at the National Intrepid Center of Excellence (NICoE) at Walter Reed National Military Medical Center, Bethesda, Maryland.
The NICoE, a Department of Defense organization and the senior member of the Defense Intrepid Network for Traumatic Brain Injury and Brain Health, is among several centers charged with improving support for injured service members’ recovery, rehabilitation, and reintegration into their communities. The overarching goal, said Dr. Yeh, is to enable community neurologists to refer service members and veterans to these centers for treatment and advanced imaging when needed.
Invisible Wounds
Limitations of conventional MRI and CT make it tough to discern which patients with mTBI will return to baseline functioning, and which will develop long-term complications. Addressing the silent or invisible wounds of mTBI will require improved diagnostic, prognostic, and therapeutic tools, he said.
For their study, published in JAMA Network Open, Dr. Yeh and colleagues compared diffusion tensor imaging (DTI) and NODDI data from 65 male service members with remote (more than 2 years old) mTBI against scans of 33 noninjured controls matched for age, sex, and active-duty status.
“Although DTI is very sensitive in detecting microstructural changes in mild TBI,” he said, “it is not specific to the underlying pathophysiological changes.”
Conversely, NODDI uses biophysical modeling of intracellular diffusion, extracellular diffusion, and free water to help physicians understand subtle pathophysiological changes with greater sensitivity and specificity than DTI. “This will allow us to correlate symptoms with brain structural changes, making the invisible wound visible.”
In the study, the greatest differences between injured and control patients appeared in the following NODDI metrics (P <.001 in all analyses):
- Intracellular volume fraction (ICVF) of the right corticospinal tract (CST)
- Orientation dispersion index (ODI) of the left posterior thalamic radiation (PTR)
- ODI of the left uncinate fasciculus (UNC)
Regarding patient-reported neurobehavioral symptoms, Neurobehavioral Symptom Inventory cognitive subscores were associated with fractional anisotropy of the left UNC. In addition, PTSD Checklist–Civilian version total scores and avoidance subscores corresponded, respectively, with isotropic volume fraction (ISOVF) of the genu of the corpus callosum and with ODI of the left fornix and stria terminalis.
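As a rough, non-authoritative illustration of the two kinds of analyses summarized above (a group comparison of a NODDI metric, and an association between an imaging metric and a symptom score), the Python sketch below runs on synthetic, hypothetical per-subject values; it does not reproduce the study’s actual data or statistical models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-subject values (hypothetical; the study's data are not public).
icvf_injured = rng.normal(0.55, 0.05, 65)  # ICVF of the right CST, remote mTBI group (n = 65)
icvf_control = rng.normal(0.60, 0.05, 33)  # same metric, noninjured controls (n = 33)

# Group comparison with Welch's t-test (does not assume equal variances).
t_stat, p_group = stats.ttest_ind(icvf_injured, icvf_control, equal_var=False)

# Association between an imaging metric and a symptom score (Spearman correlation).
fa_left_unc = rng.normal(0.45, 0.04, 65)               # fractional anisotropy, left UNC
nsi_cognitive = rng.integers(0, 21, 65).astype(float)  # Neurobehavioral Symptom Inventory cognitive subscore
rho, p_assoc = stats.spearmanr(fa_left_unc, nsi_cognitive)

print(f"group difference: t = {t_stat:.2f}, p = {p_group:.3g}")
print(f"metric-symptom association: rho = {rho:.2f}, p = {p_assoc:.3g}")
```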
Next Steps
Presently, Dr. Yeh said, conventional MRI and CT usually cannot differentiate between axonal injury, axonal inflammation (which develops during the chronic phase of mTBI), and demyelination. “But newer biophysical modeling, such as NODDI, will allow us to tell the difference.” Along with providing prognostic information, he said, such technology can guide appropriate treatment, such as anti-inflammatory agents for chronic inflammation.
Most community neurologists refer patients for advanced neuroimaging when mTBI symptoms persist and CT and conventional MRI show no red flags, said Dr. Yeh. But because few community neurologists are familiar with NODDI, he said, broadening its reach will require educating these providers. Additional steps that Dr. Yeh said could occur over the next decade or more include boosting advanced dMRI sensitivity levels through improved hardware, software, and diagnostic tools.
“We need to make these techniques clinically feasible,” he added. Protocols that acquire advanced dMRI scans in about 10 minutes are already achievable.
The investments required to implement advanced dMRI techniques will be substantial. A state-of-the-art 3T MRI scanner that can support NODDI and DTI can easily cost $1 million, said Dr. Yeh. Factor in additional equipment options and construction costs, he added, and the total price tag can easily exceed $2 million. But rather than replacing all existing MRI systems, said Dr. Yeh, AI one day may help translate high-gradient capability even to widely used lower-field MRI scanners operating at 0.5T.
Streamlining systems that incorporate disparate scanners with different acquisition parameters will require standardized data acquisition and sharing parameters. Along with helping to evaluate new techniques as they become available, data harmonization and sharing can facilitate a shift from research comparisons between large groups to comparing a single patient against many others — a move that Dr. Yeh said must occur for advanced dMRI techniques to achieve clinical relevance.
In addition, experts will need to revise clinical guidelines for use of new technologies as their availability grows. “Improper use of these techniques will not only increase health costs, but also probably result in adverse health results.” Such guidelines could be very useful in evaluating the suitability and quality of referrals for diagnostic images, Dr. Yeh said.
Dr. Yeh reports no relevant financial interests. The project was partially funded by the US Army Medical Research and Materiel Command.
, according to authors of a recent study. In particular, they said, using neurite orientation dispersion and density imaging (NODDI) to monitor long-term mTBI impact on brain regions related to cognitive and emotional processing can help clinicians assess recovery, predict progression, and optimize treatment.
“Currently,” said co-senior study author Ping-Hong Yeh, PhD, “there is a lack of minimally invasive, quantitative diagnostic biomarkers for monitoring progression or recovery after mild TBI. However, mild TBI can be quite disabling, with many patients reporting symptoms months or even years after injury. This is the most difficult part to diagnose.” Dr. Yeh is a researcher at the National Intrepid Center of Excellence (NICoE) at Walter Reed National Military Medical Center, Bethesda, Maryland.
The NICoE, a Department of Defense organization and the senior member of Defense Intrepid Network for Traumatic Brain Injury and Brain Health, is among several centers charged with improving support for injured service members’ recovery, rehabilitation, and reintegration into their communities. The overarching goal, said Dr. Yeh, is to enable community neurologists to refer service members and veterans to these centers for treatment and advanced imaging when needed.
Invisible Wounds
Limitations of conventional MRI and CT make it tough to discern which patients with mTBI will return to baseline functioning, and which will develop long-term complications. Addressing the silent or invisible wounds of mTBI will require improved diagnostic, prognostic, and therapeutic tools, he said.
For their study, published in JAMA Network Open, Dr. Yeh and colleagues compared diffusion tensor imaging (DTI) and NODDI data from 65 male service members with remote (more than 2 years old) mTBI against scans of 33 noninjured controls matched for age, sex, and active-duty status.
“Although DTI is very sensitive in detecting microstructural changes in mild TBI,” he said, “it is not specific to the underlying pathophysiological changes.”
Conversely, NODDI uses biophysical modeling of intracellular diffusion, extracellular diffusion, and free water to help physicians to understand subtle pathophysiological changes with greater sensitivity and specificity than does DTI. “This will allow us to correlate symptoms with brain structural changes, making the invisible wound visible.”
In the study, the greatest differences between injured and control patients appeared in the following NODDI metrics (P <.001 in all analyses):
- Intracellular volume fraction (ICVF) of the right corticospinal tract (CST)
- Orientation dispersion index (ODI) of the left posterior thalamic radiation (PTR)
- ODI of the left uncinate fasciculus (UNC)
Regarding patient-reported neurobehavioral symptoms, Neurobehavioral Symptom Inventory cognitive subscores were associated with fractional anisotropy of the left UNC. In addition, PTSD Checklist–Civilian version total scores and avoidance subscores corresponded, respectively, with isotropic volume fraction (ISOVF) of the genu of corpus callosum and with ODI of the left fornix and stria terminalis.
Next Steps
Presently, Dr. Yeh said, conventional MRI and CT usually cannot differentiate between axonal injury, axonal inflammation (which develops during the chronic phase of mTBI), and demyelination. “But newer biophysical modeling, such as NODDI, will allow us to tell the difference.” Along with providing prognostic information, he said, such technology can guide appropriate treatment, such as anti-inflammatory agents for chronic inflammation.
Most community neurologists refer patients with persistent mTBI symptoms in the absence of red flags using CT and conventional MRI for advanced neuroimaging, said Dr. Yeh. But because few community neurologists are familiar with NODDI, he said, broadening its reach will require educating these providers. Additional steps that Dr. Yeh said could occur over the next decade or more include boosting advanced dMRI sensitivity levels through improved hardware, software, and diagnostic tools.
“We need to make these techniques clinically feasible,” he added. Currently, protocols that allow advanced dMRI scans in about 10 minutes can be achievable.
The investments required to implement advanced dMRI techniques will be substantial. A state-of-the-art 3T MRI scanner that can support NODDI and DTI can easily cost $1 million, said Dr. Yeh. Factor in additional equipment options and construction costs, he added, and the total price tag can easily exceed $2 million. But rather than replacing all existing MRI systems, said Dr. Yeh, AI one day may help translate high-gradient capability even to widely used lower-field MRI scanners operating at 0.5T.
Streamlining systems that incorporate disparate scanners with different acquisition parameters will require standardized protocols for data acquisition and sharing. Along with helping to evaluate new techniques as they become available, data harmonization and sharing can facilitate a shift from research comparisons between large groups to comparing a single patient against many others — a move that Dr. Yeh said must occur for advanced dMRI techniques to achieve clinical relevance.
In addition, experts will need to revise clinical guidelines for use of new technologies as their availability grows. “Improper use of these techniques will not only increase health costs, but also probably result in adverse health results.” Such guidelines could be very useful in evaluating the suitability and quality of referrals for diagnostic images, Dr. Yeh said.
Dr. Yeh reports no relevant financial interests. The project was partially funded by the US Army Medical Research and Materiel Command.
FROM JAMA NETWORK OPEN
Fecal Transplant: A New Approach for Parkinson’s Disease?
Fecal microbiota transplantation (FMT) was no more effective than placebo for improving symptoms of Parkinson’s disease, results of a new, randomized placebo-controlled trial show.
However, the study yielded some interesting insights, which the investigators believe may help in designing future “improved, and hopefully successful, trials” of the intervention.
“Further studies — for example, through modified fecal microbiota transplantation approaches or bowel cleansing — are warranted,” they concluded.
The study was published online in JAMA Neurology.
Gut Dysfunction: An Early Symptom
Investigators led by Filip Scheperjans, MD, Helsinki University Hospital, Finland, explained that gut dysfunction is a prevalent, early symptom in Parkinson’s disease and is associated with more rapid disease progression.
Interventions targeting gut microbiota, such as FMT, have shown promising symptomatic, and potentially neuroprotective, effects in animal models of Parkinson’s disease.
Although several randomized clinical trials suggest efficacy of probiotics for Parkinson’s disease-related constipation, only limited clinical information on FMT is available.
In the current trial, 48 patients with Parkinson’s disease aged 35-75 years with mild to moderate symptoms and dysbiosis of fecal microbiota were randomized in a 2:1 ratio to receive FMT or placebo infused into the cecum via colonoscopy.
All patients had whole-bowel lavage starting the day before the colonoscopy. Fecal microbiota transplantation was administered as a single dose without antibiotic pretreatment.
Active treatment was a freeze-stored preparation of 30 g of feces from one of two donors who were healthy individuals without dysbiosis. The preparation was mixed with 150 mL of sterile physiologic saline and 20 mL of 85% glycerol for cryoprotection to improve viability of microbes. Placebo was the carrier solution alone, consisting of 180 mL of sterile physiologic saline and 20 mL of 85% glycerol.
The primary endpoint, a change in Parkinson’s disease symptoms as assessed on the Unified Parkinson’s Disease Rating Scale (UPDRS) at 6 months, did not differ between the two study groups.
Gastrointestinal adverse events were more frequent in the FMT group, occurring in 16 patients (53%) versus one patient (7%) in the placebo group. But no major safety concerns were observed.
Secondary outcomes and post hoc analyses showed a greater increase in dopaminergic medication, which may indicate faster disease progression, but also improvement in certain motor and nonmotor outcomes in the placebo group.
Microbiota changes were more pronounced after FMT, but dysbiosis status was reversed more frequently in the placebo group.
The researchers noted that the apparent futility in this trial contrasts with several previous small clinical studies of fecal transplantation that suggested potential improvement of Parkinson’s disease symptoms.
In addition, encouraging results from the probiotics field suggest that an impact on motor and nonmotor Parkinson’s disease symptoms through gut microbiota manipulation is possible.
The researchers raised the possibility that the placebo procedure was not an inert comparator, given the relatively strong and sustained gut microbiota alteration and dysbiosis conversion observed in the placebo group, and suggested that the colonic cleansing procedure may also have had some beneficial effect.
“It seems possible that, after cleansing of a dysbiotic gut microbiota, recolonization leads to a more physiologic gut microbiota composition with symptom improvement in the placebo group. This warrants further exploration of modified fecal microbiota transplantation approaches and bowel cleansing in Parkinson’s disease,” they concluded.
Distinct Gut Microbiome
In an accompanying editorial, Timothy R. Sampson, PhD, assistant professor, Department of Cell Biology, Emory University School of Medicine, Atlanta, pointed out that dozens of independent studies have now demonstrated a distinct gut microbiome composition associated with Parkinson’s disease, and experimental data suggest that this has the capacity to incite inflammatory responses; degrade intestinal mucosa; and dysregulate a number of neuroactive and amyloidogenic molecules, which could contribute to the disease.
He noted that three other small placebo-controlled studies of fecal transplantation in Parkinson’s disease showed slightly more robust responses in the active treatment group, including improvements in UPDRS scores and gastrointestinal symptoms.
However, these studies tested different FMT procedures, including lyophilized oral capsules given at different dosing frequencies and either nasojejunal or colonic infusion following a standard bowel preparation.
In addition, there is no consensus on pretransplant procedures, such as antibiotics or bowel clearance, and the choice of donor microbiome is probably essential, because there may be certain microbes required to shift the entire community, Dr. Sampson wrote.
Understanding how microbial contributions directly relate to Parkinson’s disease would identify individuals more likely to respond to peripheral interventions, and further exploration is needed to shed light on particular microbes that warrant targeting for either enrichment or depletion, he added.
“Despite a lack of primary end point efficacy in this latest study, in-depth comparison across these studies may reveal opportunities to refine fecal microbiota transplantation approaches. Together, these studies will continue to refine the hypothesis of a microbial contribution to Parkinson’s disease and reveal new therapeutic avenues,” Dr. Sampson concluded.
‘Planting Grass in a Yard Full of Weeds’
Commenting on the research, James Beck, PhD, chief scientific officer of the Parkinson’s Foundation, New York, said that whether FMT is helpful remains to be determined.
“The key question that needs to be solved is how to best perform these transplants. One issue is that you cannot plant grass when the yard is full of weeds. However, if you take too hard an approach killing the weeds — that is, with powerful antibiotics — you jeopardize the new grass, or in this case, the bacteria in the transplant. Solving that issue will be important as we consider whether this is effective or not.”
Dr. Beck added that there is still much to be learned from research into the gut microbiota. “I am hopeful with additional effort we will have answers soon.”
A version of this article appeared on Medscape.com.
FROM JAMA NEUROLOGY
Treatable Condition Misdiagnosed as Dementia in Almost 13% of Cases
A study of more than 68,000 individuals in the general population diagnosed with dementia between 2009 and 2019 found that almost 13% had FIB-4 scores indicative of cirrhosis and potential hepatic encephalopathy.
The findings, recently published online in The American Journal of Medicine, corroborate and extend the researchers’ previous work, which showed that about 10% of US veterans with a dementia diagnosis may in fact have hepatic encephalopathy.
“We need to increase awareness that cirrhosis and related brain complications are common, silent, but treatable when found,” said corresponding author Jasmohan Bajaj, MD, of Virginia Commonwealth University and Richmond VA Medical Center, Richmond, Virginia. “Moreover, these are being increasingly diagnosed in older individuals.”
“Cirrhosis can also predispose patients to liver cancer and other complications, so diagnosing it in all patients is important, regardless of the hepatic encephalopathy-dementia connection,” he said.
FIB-4 Is Key
Dr. Bajaj and colleagues analyzed data from 72 healthcare centers on 68,807 nonveteran patients diagnosed with dementia at two or more physician visits between 2009 and 2019. Patients had no prior cirrhosis diagnosis, the mean age was 73 years, 44.7% were men, and 78% were White.
The team measured the prevalence of two high FIB-4 scores (> 2.67 and > 3.25), selected for their strong predictive value for advanced cirrhosis. Researchers also examined associations between high scores and multiple comorbidities and demographic factors.
Alanine aminotransferase (ALT), aspartate aminotransferase (AST), and platelet labs were collected up to 2 years after the index dementia diagnosis because they are used to calculate FIB-4.
The mean FIB-4 score was 1.78, mean ALT was 23.72 U/L, mean AST was 27.42 U/L, and mean platelet count was 243.51 × 10⁹/L.
A total of 8683 participants (12.8%) had a FIB-4 score greater than 2.67 and 5185 (7.6%) had a score greater than 3.25.
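For context, the FIB-4 index combines the three laboratory values above with age: FIB-4 = (age × AST) / (platelet count × √ALT), with age in years, AST and ALT in U/L, and platelets in 10⁹/L. A minimal Python sketch of that standard calculation, including the age cap at 65 years used in the authors’ modified version, follows; the function name and example output are illustrative only.

    import math

    def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
             platelets_1e9_l: float, age_cap: float = 65.0) -> float:
        """FIB-4 = (age x AST) / (platelets x sqrt(ALT)).
        Age in years (capped per the study's modified index),
        AST/ALT in U/L, platelet count in 10^9/L."""
        age = min(age_years, age_cap)
        return (age * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

    # Plugging in the cohort's mean inputs (mean age 73 is capped at 65):
    score = fib4(age_years=73, ast_u_l=27.42, alt_u_l=23.72, platelets_1e9_l=243.51)
    print(round(score, 2))             # ~1.50; a score computed from mean inputs
                                       # need not equal the reported mean FIB-4 of 1.78
    print(score > 2.67, score > 3.25)  # thresholds examined in the study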
In multivariable logistic regression models, FIB-4 greater than 3.25 was associated with viral hepatitis (odds ratio [OR], 2.23), congestive heart failure (OR, 1.73), HIV (OR, 1.72), male gender (OR, 1.42), alcohol use disorder (OR, 1.39), and chronic kidney disease (OR, 1.38).
FIB-4 greater than 3.25 was inversely associated with White race (OR, 0.76) and diabetes (OR, 0.82).
The associations were similar when using a threshold score of greater than 2.67.
“With the aging population, including those with cirrhosis, the potential for overlap between hepatic encephalopathy and dementia has risen and should be considered in the differential diagnosis,” the authors wrote. “Undiagnosed cirrhosis and potential hepatic encephalopathy can be a treatable cause of or contributor towards cognitive impairment in patients diagnosed with dementia.”
Providers should use the FIB-4 index as a screening tool to detect cirrhosis in patients with dementia, they concluded.
The team’s next steps will include investigating barriers to the use of FIB-4 among practitioners, Dr. Bajaj said.
Incorporating use of the FIB-4 index into screening guidelines “with input from all stakeholders, including geriatricians, primary care providers, and neurologists … would greatly expand the diagnosis of cirrhosis and potentially hepatic encephalopathy in dementia patients,” Dr. Bajaj said.
The study had a few limitations, including the selected centers in the cohort database, lack of chart review to confirm diagnoses in individual cases, and the use of a modified FIB-4, with age capped at 65 years.
‘Easy to Miss’
Commenting on the research, Nancy Reau, MD, section chief of hepatology at Rush University Medical Center in Chicago, said that it is easy for physicians to miss asymptomatic liver disease that could progress and lead to cognitive decline.
“Most of my patients are already labeled with liver disease; however, it is not uncommon to receive a patient from another specialist who felt their presentation was more consistent with liver disease than the issue they were referred for,” she said.
Still, even in metabolic dysfunction–associated steatotic liver disease, which affects nearly one third of the population, the condition isn’t advanced enough in most patients to cause symptoms similar to those of dementia, said Dr. Reau, who was not associated with the study.
“It is more important for specialists in neurology to exclude liver disease and for hepatologists or gastroenterologists to be equipped with tools to exclude alternative explanations for neurocognitive presentations,” she said. “It is important to not label a patient as having HE and then miss alternative explanations.”
“Every presentation has a differential diagnosis. Using easy tools like FIB-4 can make sure you don’t miss liver disease as a contributing factor in a patient that presents with neurocognitive symptoms,” Dr. Reau said.
This work was partly supported by grants from the Department of Veterans Affairs merit review program and the National Institutes of Health’s National Center for Advancing Translational Sciences. Dr. Bajaj and Dr. Reau reported no conflicts of interest.
A version of this article appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF MEDICINE
In Some Patients, Antiseizure Medications Can Cause Severe Skin Reactions
Certain antiseizure medications (ASMs) can cause severe, potentially life-threatening skin reactions, and testing for human leukocyte antigen (HLA) risk alleles can help identify the patients most at risk, according to authors of a recent review. When higher-risk patients must start drugs most associated with HLA-related reaction risk before test results are available, the authors advised starting at low doses and titrating slowly.
“When someone is having a seizure drug prescribed,” said senior author Ram Mani, MD, MSCE, chief of epilepsy at Rutgers Robert Wood Johnson Medical School in New Brunswick, New Jersey, “it’s often a tense clinical situation because the patient has either had the first few seizures of their life, or they’ve had a worsening in their seizures.”
To help physicians optimize choices, Dr. Mani and colleagues reviewed literature regarding 31 ASMs. Their study was published in Current Treatment Options in Neurology.
Overall, said Dr. Mani, incidence of benign skin reactions such as morbilliform exanthematous eruptions, which account for 95% of cutaneous adverse drug reactions (CADRs), ranges from a few percent up to 15%. “It’s a somewhat common occurrence. Fortunately, the reactions that can lead to morbidity and mortality are fairly rare.”
Severe Cutaneous Adverse Reactions
Among the five ASMs approved by the Food and Drug Administration since 2018, cenobamate has sparked the greatest concern. In early clinical development for epilepsy, a fast titration schedule (starting at 50 mg/day and increasing by 50 mg every 2 weeks to at least 200 mg/day) resulted in three cases of drug reaction with eosinophilia and systemic symptoms (DRESS, also called drug-induced hypersensitivity reaction/DIHS), including one fatal case. Based on a phase 3 trial, the drug’s manufacturer now recommends starting at 12.5 mg and titrating more slowly.
DRESS/DIHS appears within 2-6 weeks of drug exposure. Along with malaise, fever, and conjunctivitis, symptoms can include skin eruptions ranging from morbilliform to hemorrhagic and bullous. “Facial edema and early facial rash are classic findings,” the authors added. DRESS also can involve painful lymphadenopathy and potentially life-threatening damage to the liver, heart, and other organs.
Stevens-Johnson syndrome (SJS), which is characterized by detached skin measuring less than 10% of the entire body surface area, typically happens within the first month of drug exposure. Flu-like symptoms can appear 1-3 days before erythematous to dusky macules, commonly on the chest, as well as cutaneous and mucosal erosions. Along with the skin and conjunctiva, SJS can affect the eyes, lungs, liver, bone marrow, and gastrointestinal tract.
When patients present with possible DRESS or SJS, the authors recommended inpatient multidisciplinary care. Having ready access to blood tests can help assess severity and prognosis, Dr. Mani explained. Inpatient evaluation and treatment also may allow faster access to other specialists as needed, and monitoring of potential seizure exacerbation in patients with uncontrolled seizures for whom the drug provided benefit but required abrupt discontinuation.
Often, he added, all hope is not lost for future use of the medication after a minor skin reaction. A case series and literature review of mild lamotrigine-associated CADRs showed that most patients could reintroduce and titrate lamotrigine by waiting at least 4 weeks, beginning at 5 mg/day, and gradually increasing to 25 mg/day.
Identifying Those at Risk
With millions of patients being newly prescribed ASMs annually, accurately screening out all people at risk of severe cutaneous adverse reactions based on available genetic information is impossible. The complexity of evolving recommendations for HLA testing makes them hard to remember, Dr. Mani said. “Development and better use of clinical decision support systems can help.”
Accordingly, he starts with a thorough history and physical examination, inquiring about prior skin reactions or hypersensitivity, which are risk factors for future reactions to drugs such as carbamazepine, phenytoin, phenobarbital, oxcarbazepine, lamotrigine, rufinamide, and zonisamide. “Most of the medicines that the HLA tests are being done for are not the initial medicines I typically prescribe for a patient with newly diagnosed epilepsy,” said Dr. Mani. For ASM-naive patients with moderate or high risk of skin hypersensitivity reactions, he usually starts with lacosamide, levetiracetam, or brivaracetam. Additional low-risk drugs he considers in more complex cases include valproate, topiramate, and clobazam.
Only if a patient’s initial ASM causes problems will Dr. Mani consider higher-risk options and order HLA tests for patients belonging to indicated groups — such as testing for HLA-B*15:02 in Asian patients being considered for carbamazepine. About once weekly, he must put a patient on a potentially higher-risk drug before test results are available. If, after a thorough risk-benefit discussion, he and the patient agree that the higher-risk drug is warranted, Dr. Mani starts at a lower-than-labeled dose, with a slower titration schedule that typically extends the ramp-up period by 1 week.
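As a purely illustrative sketch of the sort of clinical decision support the review authors call for — not a validated tool or clinical guidance — the screening steps described in this article might be encoded roughly as follows; the drug list, flags, and function name are assumptions drawn only from the text.

    # Illustrative only; mirrors the workflow described in the article.
    HIGHER_RISK_ASMS = {"carbamazepine", "phenytoin", "phenobarbital",
                        "oxcarbazepine", "lamotrigine", "rufinamide", "zonisamide"}

    def screening_prompts(asm: str, prior_drug_rash: bool,
                          hla_b_1502_test_indicated: bool,
                          hla_result_available: bool) -> list:
        """Return reminder prompts based on the approach described by Dr. Mani."""
        prompts = []
        if asm in HIGHER_RISK_ASMS and prior_drug_rash:
            prompts.append("History of drug rash/hypersensitivity: "
                           "consider a lower-risk ASM first.")
        if asm == "carbamazepine" and hla_b_1502_test_indicated:
            prompts.append("Order HLA-B*15:02 testing before starting carbamazepine.")
            if not hla_result_available:
                prompts.append("If treatment cannot wait for results, start below the "
                               "labeled dose and titrate more slowly.")
        return prompts

    print(screening_prompts("carbamazepine", prior_drug_rash=False,
                            hla_b_1502_test_indicated=True,
                            hla_result_available=False))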
Fortunately, Dr. Mani said that, in 20 years of practice, he has seen more misdiagnoses — involving rashes from poison ivy, viral infections, or allergies — than actual ASM-induced reactions. “That’s why the patient, family, and practitioner need to be open-minded about what could be causing the rash.”
Dr. Mani reported no relevant conflicts. The study authors reported no funding sources.
FROM CURRENT TREATMENT OPTIONS IN NEUROLOGY
TBI Significantly Increases Mortality Rate Among Veterans With Epilepsy
Traumatic brain injury (TBI) significantly increases the mortality rate among veterans with epilepsy, according to recent research published in Epilepsia.
In a retrospective cohort study, Ali Roghani, PhD, of the division of epidemiology at the University of Utah School of Medicine in Salt Lake City, and colleagues evaluated 938,890 veterans who served in the US military after the September 11 attacks and received care in the Defense Health Agency and the Veterans Health Administration between 2000 and 2019. Overall, 27,436 veterans met criteria for a diagnosis of epilepsy, 264,890 had received a diagnosis of TBI, and the remaining patients had neither epilepsy nor TBI.
Among the veterans without epilepsy, 248,714 had a TBI diagnosis. In the group with epilepsy, 10,358 veterans experienced a TBI before their epilepsy diagnosis, 1598 were diagnosed with a TBI within 6 months of epilepsy, and 4310 had a TBI more than 6 months after an epilepsy diagnosis. The researchers assessed all-cause mortality in each group, calculating cumulative mortality rates compared with the group of veterans who had neither a TBI nor an epilepsy diagnosis.
Dr. Roghani and colleagues found a significantly higher mortality rate among veterans who developed epilepsy than in a control group with neither epilepsy nor TBI (6.26% vs. 1.12%; P < .01); most of the veterans who died were White (67.4%) and men (89.9%). Compared with deceased veterans, nondeceased veterans were significantly more likely to have a history of deployment (70.7% vs. 64.8%; P < .001), were less likely to have served in the Army (52.2% vs. 55.0%; P < .001), and were more likely to have reached the rank of officer or warrant officer (8.1% vs. 7.6%; P = .014).
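As a rough, hypothetical illustration of how a crude difference in mortality proportions of this size can be tested (this is not the authors’ actual method or code), the Python sketch below uses the cohort sizes reported above and back-calculates death counts from the quoted percentages:

```python
# Rough illustration only: comparing crude mortality proportions between the
# epilepsy group and the group with neither epilepsy nor TBI. Denominators use
# the cohort sizes reported in the article; deaths are back-calculated from the
# quoted percentages, so this only approximates the published comparison.
from statsmodels.stats.proportion import proportions_ztest

n_epilepsy = 27_436
n_neither = 938_890 - 27_436 - 248_714        # veterans with neither condition

deaths_epilepsy = round(0.0626 * n_epilepsy)  # 6.26% mortality
deaths_neither = round(0.0112 * n_neither)    # 1.12% mortality

stat, p_value = proportions_ztest(
    count=[deaths_epilepsy, deaths_neither],
    nobs=[n_epilepsy, n_neither],
)
print(f"z = {stat:.1f}, p = {p_value:.3g}")   # consistent with P < .01
```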
There were also significant differences in clinical characteristics between nondeceased and deceased veterans, including higher rates of substance use disorder, smoking history, cardiovascular disease, stroke, transient ischemic attack, cancer, liver disease, kidney disease, and other injuries, as well as overdose, suicidal ideation, and homelessness. “Most clinical conditions were significantly different between deceased and nondeceased in part due to the large cohort size,” the researchers said.
In Cox regression analyses, the researchers found a higher mortality risk, relative to veterans who had neither epilepsy nor a TBI, among those who developed a TBI within 6 months of an epilepsy diagnosis (hazard ratio [HR], 5.02; 95% CI, 4.21-5.99), those who had a TBI prior to epilepsy (HR, 4.25; 95% CI, 3.89-4.58), those who had epilepsy alone (HR, 4.00; 95% CI, 3.67-4.36), those who had a TBI more than 6 months after an epilepsy diagnosis (HR, 2.49; 95% CI, 2.17-2.85), and those who had a TBI alone (HR, 1.30; 95% CI, 1.25-1.36).
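For readers unfamiliar with how hazard ratios of this kind are produced, the sketch below shows, with purely hypothetical data and variable names, how a Cox proportional hazards model with exposure-group indicators could be fit in Python using the lifelines library. It illustrates the general technique only and is not the study’s analysis code:

```python
# Illustrative sketch only: estimating group-level hazard ratios (e.g., TBI
# within 6 months of an epilepsy diagnosis vs. neither condition) with a Cox
# proportional hazards model. Data, column names, and covariates are
# hypothetical; this is not the study's analysis code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

groups = ["neither", "tbi_only", "epilepsy_only", "tbi_before_epilepsy",
          "tbi_within_6mo_after", "tbi_more_than_6mo_after"]
df = pd.DataFrame({
    "group": rng.choice(groups, size=n),
    "age": rng.normal(35, 10, size=n),
    "follow_up_years": rng.uniform(1, 19, size=n),  # time at risk
    "died": rng.integers(0, 2, size=n),             # all-cause mortality event
})

# One-hot encode the exposure groups, keeping "neither" as the reference level.
dummies = pd.get_dummies(df["group"], prefix="group").astype(int)
X = pd.concat(
    [df.drop(columns=["group"]), dummies.drop(columns=["group_neither"])],
    axis=1,
)

cph = CoxPHFitter()
cph.fit(X, duration_col="follow_up_years", event_col="died")

# Exponentiated coefficients are hazard ratios vs. the reference group.
print(cph.hazard_ratios_)
cph.print_summary()  # includes 95% confidence intervals
```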
“The temporal relationship with TBI that occurred within 6 months after epilepsy diagnosis may suggest an increased vulnerability to accidents, severe injuries, or TBI resulting from seizures, potentially elevating mortality risk,” Dr. Roghani and colleagues wrote.
The researchers said the results “raise concerns” about the subgroup of patients who are diagnosed with epilepsy close to experiencing a TBI.
“Our results provide information regarding the temporal relationship between epilepsy and TBI regarding mortality in a cohort of post-9/11 veterans, which highlights the need for enhanced primary prevention, such as more access to health care among people with epilepsy and TBI,” they said. “Given the rising incidence of TBI in both the military and civilian populations, these findings suggest close monitoring might be crucial to develop effective prevention strategies for long-term complications, particularly [post-traumatic epilepsy].”
Reevaluating the Treatment of Epilepsy
Juliann Paolicchi, MD, a neurologist and member of the epilepsy team at Northwell Health in New York, who was not involved with the study, said in an interview that TBIs have been studied more closely since the beginning of the conflicts in the Middle East, particularly in Iraq and Afghanistan, where “newer artillery causes more diffuse traumatic injury to the brain and the body than the effects of more typical weaponry.”
The study by Roghani and colleagues “is groundbreaking in that it looks at the connection and timing of these two disruptive forces, epilepsy and TBI, on the brain,” she said. “The study reveals that timing is everything: The combination of two disrupting circuitry effects in proximity can have a deadly effect. The summation is greater than either alone in veterans, and has significant effects on the brain’s ability to sustain the functions that keep us alive.”
The 6 months following either a diagnosis of epilepsy or TBI is “crucial,” Dr. Paolicchi noted. “Military and private citizens should be closely monitored during this period, and the results suggest they should refrain from activities that could predispose to further brain injury.”
In addition, current standards for treatment of epilepsy may need to be reevaluated, she said. “Patients are not always treated with a seizure medication after a first seizure, but perhaps, especially in patients at higher risk for brain injury such as the military and athletes, that policy warrants further examination.”
The findings by Roghani and colleagues may also extend to other groups, such as evaluating athletes after a concussion, patients after they are in a motor vehicle accident, and infants with traumatic brain injury, Dr. Paolicchi said. “The results suggest a reexamining of the proximity [of TBI] and epilepsy in these and other areas,” she noted.
The authors reported personal and institutional relationships in the form of research support and other financial compensation from AbbVie, Biohaven, CURE, Department of Defense, Department of Veterans Affairs (VA), Eisai, Engage, National Institutes of Health, Sanofi, SCS Consulting, Sunovion, and UCB. This study was supported by funding from the Department of Defense, VA Health Systems, and the VA HSR&D Informatics, Decision Enhancement, and Analytic Sciences Center of Innovation. Dr. Paolicchi reports no relevant conflicts of interest.
FROM EPILEPSIA
Night Owl or Lark? The Answer May Affect Cognition
People who are evening types, or night owls, may perform better on cognitive tests than morning types, or larks, new research suggests.
“Rather than just being personal preferences, these chronotypes could impact our cognitive function,” said study investigator, Raha West, MBChB, with Imperial College London, London, England, in a statement.
But the researchers also urged caution when interpreting the findings.
“It’s important to note that this doesn’t mean all morning people have worse cognitive performance. The findings reflect an overall trend where the majority might lean toward better cognition in the evening types,” Dr. West added.
In addition, across the board, getting the recommended 7-9 hours of nightly sleep was best for cognitive function, and sleeping for less than 7 or more than 9 hours had detrimental effects on brain function regardless of whether an individual was a night owl or lark.
The study was published online in BMJ Public Health.
A UK Biobank Cohort Study
The findings are based on a cross-sectional analysis of 26,820 adults aged 53-86 years from the UK Biobank database, who were categorized into two cohorts.
Cohort 1 had 10,067 participants (56% women) who completed four cognitive tests measuring fluid intelligence/reasoning, pairs matching, reaction time, and prospective memory. Cohort 2 had 16,753 participants (56% women) who completed two cognitive assessments (pairs matching and reaction time).
Participants self-reported sleep duration, chronotype, and sleep quality. Cognitive test scores were evaluated against sleep parameters and health and lifestyle factors, including sex, age, vascular and cardiac conditions, diabetes, alcohol use, smoking habits, and body mass index.
The results revealed a positive association between normal sleep duration (7-9 hours) and cognitive scores in Cohort 1 (beta, 0.0567), while extended sleep duration was negatively associated with scores in both Cohorts 1 and 2 (beta, –0.188 and –0.2619, respectively).
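As context for what these beta coefficients represent, the following minimal sketch shows the general form of such an adjusted linear regression in Python with statsmodels; the data, variable names, and covariates are hypothetical and do not reproduce the study’s model:

```python
# Minimal sketch of the kind of linear regression that yields beta coefficients
# like those quoted: a cognitive score regressed on sleep-duration category
# with adjustment for covariates. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

df = pd.DataFrame({
    "cog_score": rng.normal(0, 1, size=n),   # standardized cognitive score
    "sleep_cat": rng.choice(["short", "normal", "long"], size=n),
    "age": rng.normal(65, 8, size=n),
    "bmi": rng.normal(27, 4, size=n),
    "diabetes": rng.integers(0, 2, size=n),
})

# Treatment coding: each beta is the adjusted difference in cognitive score
# relative to the reference sleep category.
model = smf.ols(
    "cog_score ~ C(sleep_cat, Treatment(reference='short')) + age + bmi + diabetes",
    data=df,
).fit()
print(model.params)  # coefficients analogous to the reported betas
```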
An individual’s preference for evening or morning activity correlated strongly with their test scores. In particular, night owls consistently performed better on cognitive tests than early birds.
“While understanding and working with your natural sleep tendencies is essential, it’s equally important to remember to get just enough sleep, not too long or too short,” Dr. West noted. “This is crucial for keeping your brain healthy and functioning at its best.”
Contrary to some previous findings, the study did not find a significant relationship between sleep, sleepiness/insomnia, and cognitive performance. This may be because specific aspects of insomnia, such as severity and chronicity, as well as comorbid conditions need to be considered, the investigators wrote.
They added that age and diabetes consistently emerged as negative predictors of cognitive functioning across both cohorts, in line with previous research.
Limitations of the study include the cross-sectional design, which limits causal inferences; the possibility of residual confounding; and reliance on self-reported sleep data.
Also, the study did not adjust for educational attainment, a factor potentially influential on cognitive performance and sleep patterns, because of incomplete data. The study also did not factor in depression and social isolation, which have been shown to increase the risk for cognitive decline.
No Real-World Implications
Several outside experts offered their perspective on the study in a statement from the UK nonprofit Science Media Centre.
The study provides “interesting insights” into the difference in memory and thinking in people who identify themselves as a “morning” or “evening” person, Jacqui Hanley, PhD, with Alzheimer’s Research UK, said in the statement.
However, without a detailed picture of what is going on in the brain, it’s not clear whether being a morning or evening person affects memory and thinking or whether a decline in cognition is causing changes to sleeping patterns, Dr. Hanley added.
Roi Cohen Kadosh, PhD, CPsychol, professor of cognitive neuroscience, University of Surrey, Guildford, England, cautioned that there are “multiple potential reasons” for these associations.
“Therefore, there are no implications in my view for the real world. I fear that the general public will not be able to understand that and will change their sleep pattern, while this study does not give any evidence that this will lead to any benefit,” Dr. Cohen Kadosh said.
Jessica Chelekis, PhD, MBA, a sleep expert from Brunel University London, Uxbridge, England, said that the “main takeaway should be that the cultural belief that early risers are more productive than ‘night owls’ does not hold up to scientific scrutiny.”
“While everyone should aim to get good-quality sleep each night, we should also try to be aware of what time of day we are at our (cognitive) best and work in ways that suit us. Night owls, in particular, should not be shamed into fitting a stereotype that favors an ‘early to bed, early to rise’ practice,” Dr. Chelekis said.
Funding for the study was provided by the Korea Institute of Oriental Medicine in collaboration with Imperial College London. Dr. Hanley, Dr. Cohen Kadosh, and Dr. Chelekis have no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM BMJ PUBLIC HEALTH
EMA Warns of Anaphylactic Reactions to MS Drug
The European Medicines Agency (EMA) has warned of a risk for anaphylactic reactions with glatiramer acetate, a disease-modifying therapy (DMT) for relapsing multiple sclerosis (MS) that is given by injection.
The drug has been used to treat MS for more than 20 years, during which time it has had a good safety profile. Common side effects include vasodilation, arthralgia, anxiety, hypertonia, palpitations, and lipoatrophy.
A meeting of the EMA’s Pharmacovigilance Risk Assessment Committee (PRAC), held on July 8-11, considered evidence from an EU-wide review of all available data concerning anaphylactic reactions with glatiramer acetate. As a result, the committee concluded that the medicine is associated with a risk for anaphylactic reactions, which may occur shortly after administration or even months or years later.
Risk for Delays to Treatment
Cases involving the use of glatiramer acetate with a fatal outcome have been reported, PRAC noted.
The committee cautioned that because the initial symptoms could overlap with those of a postinjection reaction, there was a risk for delay in identifying an anaphylactic reaction.
PRAC has sanctioned a direct healthcare professional communication (DHPC) to inform healthcare professionals about the risk. Patients and caregivers should be advised of the signs and symptoms of an anaphylactic reaction and the need to seek emergency care if this should occur, the committee added. In the event of such a reaction, treatment with glatiramer acetate must be discontinued, PRAC stated.
Once adopted, the DHPC for glatiramer acetate will be disseminated to healthcare professionals by the marketing authorization holders.
Anaphylactic reactions associated with the use of glatiramer acetate have been noted in the medical literature for some years. A letter from members of the department of neurology at Albert Ludwig University Freiburg, Freiburg im Breisgau, Germany, published in the journal European Neurology in 2011, detailed six cases of anaphylactoid or anaphylactic reactions in patients undergoing treatment with glatiramer acetate.
The authors highlighted that in one of the cases, a grade 1 anaphylactic reaction occurred 3 months after treatment with the drug was initiated.
A version of this article first appeared on Medscape.com.