Impaired vision an overlooked dementia risk factor
Investigators analyzed estimated population attributable fractions (PAFs) associated with dementia in more than 16,000 older adults. A PAF represents the proportion of dementia cases that could be prevented if a given risk factor were eliminated.
Results showed the PAF of vision impairment was 1.8%, suggesting that eliminating vision impairment could have prevented more than 100,000 cases of dementia in the United States.
“Vision impairment and blindness disproportionately impact older adults, yet vision impairment is often preventable or even correctable,” study investigator Joshua Ehrlich, MD, assistant professor of ophthalmology and visual sciences, University of Michigan, Ann Arbor, said in an interview.
Poor vision affects not only how individuals see the world, but also their systemic health and well-being, Dr. Ehrlich said.
“Accordingly, ensuring that older adults receive appropriate eye care is vital to promoting health, independence, and optimal aging,” he added.
The findings were published online in JAMA Neurology.
A surprising omission
There is an “urgent need to identify modifiable risk factors for dementia that can be targeted with interventions to slow cognitive decline and prevent dementia,” the investigators wrote.
In 2020, the Lancet Commission report on dementia prevention, intervention, and care proposed a life-course model of 12 potentially modifiable dementia risk factors. This included lower educational level, hearing loss, traumatic brain injury, hypertension, excessive alcohol consumption, obesity, smoking, depression, social isolation, physical inactivity, diabetes, and air pollution.
Together, these factors are associated with about 40% of dementia cases worldwide, the report notes.
Vision impairment was not included in this model, “despite considerable evidence that it is associated with an elevated risk of incident dementia and that it may operate through the same pathways as hearing loss,” the current researchers wrote.
“We have known for some time that vision impairment is a risk factor for dementia [and] we also know that a very large fraction of vision impairment, possibly in excess of 80%, is avoidable or has simply yet to be addressed,” Dr. Ehrlich said.
He and his colleagues found it “surprising that vision impairment had been ignored in key models of modifiable dementia risk factors that are used to shape health policy and resource allocation.” They set out to demonstrate that, “in fact, vision impairment is just as influential as a number of other long accepted modifiable dementia risk factors.”
The investigators assessed data from the Health and Retirement Study (HRS), a panel study that surveys more than 20,000 U.S. adults aged 50 years or older every 2 years.
The investigators applied the same methods used by the Lancet Commission to the HRS dataset and added vision impairment to the Lancet life-course model. Air pollution was excluded in their model “because those data were not readily available in the HRS,” the researchers wrote.
They noted the PAF is “based on the population prevalence and relative risk of dementia for each risk factor” and is “weighted, based on a principal components analysis, to account for communality (clustering of risk factors).”
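For readers who want the arithmetic, here is a minimal sketch of that calculation: Levin's formula for the PAF, followed by the communality down-weighting the researchers describe. The prevalence, relative-risk, and communality values below are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of a weighted PAF calculation (illustrative values only;
# the study estimates communality via a principal components analysis).

def paf(prevalence: float, relative_risk: float) -> float:
    """Levin's population attributable fraction."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

def weighted_paf(prevalence: float, relative_risk: float,
                 communality: float) -> float:
    """PAF down-weighted for overlap (clustering) with other risk factors."""
    return paf(prevalence, relative_risk) * (1 - communality)

# Hypothetical example: 12% prevalence of vision impairment, relative risk
# of 1.35 for dementia, and 40% of the factor's variance shared with others.
print(round(weighted_paf(0.12, 1.35, 0.40), 3))  # ~0.024, i.e., a 2.4% PAF
```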
A missed prevention opportunity
The sample included 16,690 participants (54% were women, 51.5% were at least 65 years old, 80.2% were White, 10.6% were Black, and 9.2% were of another race or ethnicity).
In total, the 12 potentially modifiable risk factors used in the researchers’ model were associated with an estimated 62.4% of dementia cases in the United States. Hypertension was both the most prevalent risk factor and the one with the highest weighted PAF.
A new focus for prevention
Commenting for this article, Suzann Pershing, MD, associate professor of ophthalmology, Stanford (Calif.) University, called the study “particularly important because, despite growing recognition of its importance in relation to cognition, visual impairment is often an underrecognized risk factor.”
The current research “builds on increasingly robust medical literature linking visual impairment and dementia, applying analogous methods to those used for the life course model recently presented by the Lancet Commission to evaluate potentially modifiable dementia risk factors,” said Dr. Pershing, who was not involved with the study.
The investigators “make a compelling argument for inclusion of visual impairment as one of the potentially modifiable risk factors; practicing clinicians and health care systems may consider screening and targeted therapies to address visual impairment, with a goal of population health and contributing to a reduction in future dementia disease burden,” she added.
In an accompanying editorial, Jennifer Deal, PhD, department of epidemiology and Cochlear Center for Hearing and Public Health, Baltimore, and Julio Rojas, MD, PhD, Memory and Aging Center, department of neurology, Weill Institute for Neurosciences, University of California, San Francisco, called the findings “an important reminder that dementia is a social problem in which potentially treatable risk factors, including visual impairment, are highly prevalent in disadvantaged populations.”
The editorialists noted that 90% of cases of vision impairment are “preventable or have yet to be treated.” The two “highly cost-effective interventions” of eyeglasses and/or cataract surgery “remain underused both in the U.S. and globally, especially in disadvantaged communities,” they wrote.
They added that more research is needed to “test the effectiveness of interventions to preserve cognitive health by promoting healthy vision.”
The study was supported by grants from the National Institute on Aging, the National Institutes of Health, and Research to Prevent Blindness. The investigators reported no relevant financial relationships. Dr. Deal reported having received grants from the National Institute on Aging. Dr. Rojas reported serving as site principal investigator on clinical trials for Eli Lilly and Eisai and receiving grants from the National Institute on Aging. Dr. Pershing is a consultant for Acumen and Verana Health (formerly DigiSight Technologies).
A version of this article first appeared on Medscape.com.
Lupus may lead to worse stroke outcomes for women, but not men
Women with systemic lupus erythematosus (SLE) experience worse outcomes after an acute stroke than does the general population, but men with SLE do not, according to an analysis of the U.S. National Inpatient Sample presented at the annual meeting of the British Society for Rheumatology.
In a study of more than 1.5 million cases of acute stroke recorded in the United States between 2015 and 2018, women with SLE were more likely to be hospitalized for longer and less likely to be routinely discharged into their home environment than were those without SLE. No such association was found for men with SLE.
“The findings imply that primary stroke prevention is of utmost importance, especially in females with SLE,” said Sona Jesenakova, a fourth-year medical student at the University of Aberdeen (Scotland).
“There might be a need to explore more effective and targeted treatment strategies to try and minimize these excessive adverse acute stroke outcomes, especially in females with SLE suffering from stroke,” she suggested.
“Even though males form only a minority of the SLE patient population, some studies have shown that they are prone to suffer from worse disease outcomes,” Ms. Jesenakova said.
Importantly, “male sex has been identified as a risk factor for death early in the course of SLE,” she added, highlighting that sex differences do seem to exist in SLE.
Stroke is an important outcome to examine because people with SLE are at higher risk of developing atherosclerosis, a well-established risk factor for ischemic stroke, and antiphospholipid antibody positivity and uncontrolled disease activity can raise that risk further. A meta-analysis of older studies suggested that the risk for death after a stroke is 68% higher in people with SLE than in those without.
To examine the risk for death and other in-hospital outcomes in a more contemporary population, Ms. Jesenakova and associates used data from the National Inpatient Sample, a large, publicly available database that contains inpatient health care information from across the United States. Their sample population consisted of 1,581,430 individuals who had been hospitalized for stroke. Of these, 6,100 women and 940 men had SLE; the remainder served as the “no-SLE” control population.
As might be expected, patients with SLE were about 10 years younger than those without SLE; the median ages of women with SLE, men with SLE, and those without SLE were 60, 61, and 71 years, respectively.
There was no difference in the type of stroke between the SLE and no-SLE groups; most had an ischemic stroke (around 89%) rather than a hemorrhagic stroke (around 11%).
The researchers analyzed three key outcomes: mortality at discharge, prolonged hospitalization (a stay of more than 4 days), and routine home discharge, meaning that the patient could be discharged home rather than to a more specialized facility such as a nursing home.
They conducted a multivariate analysis with adjustments made for potential confounding factors such as age, ethnicity, type of stroke, and revascularization treatment. Comorbidities, including major cardiovascular disease, were also accounted for.
Women with SLE were 21% more likely to die than patients without SLE, whereas men with SLE were 24% less likely to die than the no-SLE population; however, neither difference was statistically significant.
Women with SLE were 20% more likely to have a prolonged hospital stay and 28% less likely to have a routine home discharge than patients without SLE. Both associations were statistically significant, with 95% confidence intervals excluding 1; the corresponding associations in men with SLE were not (odds ratios of 1.06 and 1.18, respectively).
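As a minimal sketch of why a 95% confidence interval that excludes 1 marks an odds ratio as statistically significant, the snippet below recovers intervals from the log odds ratio. The odds ratios are those implied by the percentages in the text; the standard errors are hypothetical placeholders (the presentation did not report them), chosen only to reproduce the reported significance pattern.

```python
import math

def ci_95(odds_ratio: float, se_log_or: float) -> tuple[float, float]:
    """95% confidence interval for an odds ratio, given the SE of its log."""
    log_or = math.log(odds_ratio)
    return (math.exp(log_or - 1.96 * se_log_or),
            math.exp(log_or + 1.96 * se_log_or))

# Hypothetical standard errors for illustration only.
rows = [("women, prolonged stay",          1.20, 0.05),
        ("women, routine home discharge",  0.72, 0.05),
        ("men, prolonged stay",            1.06, 0.20),
        ("men, routine home discharge",    1.18, 0.20)]

for label, or_, se in rows:
    low, high = ci_95(or_, se)
    sig = "significant" if (low > 1 or high < 1) else "not significant"
    print(f"{label}: OR {or_:.2f}, 95% CI {low:.2f}-{high:.2f} -> {sig}")
```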
“As for males, even though we didn’t find anything of statistical significance, we have to bear in mind that the sample we had was quite small, and thus these results need to be interpreted with caution,” Ms. Jesenakova said. “We also think that we identified a gap in the current knowledge, and as such, further research is needed to help us understand the influence of male sex on acute stroke outcomes in patients with comorbid SLE.”
The researchers performed a secondary analysis looking at the use of revascularization treatments for ischemic stroke and found that there were no differences between individuals with and without SLE. This analysis considered the use of intravenous thrombolysis and endovascular thrombectomy in just over 1.4 million cases but did not look at sex-specific differences.
Ms. Jesenakova had no conflicts of interest to disclose.
FROM BSR 2022
Smartphone diagnosis in infant seizures could be highly effective
This video transcript has been edited for clarity.
Andrew N. Wilner, MD: Welcome to Medscape. I’m Dr. Andrew Wilner, reporting from the American Epilepsy Society meeting.
Today, I have the pleasure of speaking with Dr. Chethan Rao, a child and adolescent neurology resident from the Mayo Clinic in Jacksonville, Fla. Dr. Rao has a particular interest in pediatric epilepsy. Welcome, Dr. Rao.
Chethan Rao, DO: Thank you, Dr. Wilner. It’s a pleasure to be here, and thanks for taking the time to highlight our work.
Dr. Wilner: You had a very interesting paper at the meeting that I wanted to talk about, focused on infantile spasms and smartphone video. Before we dive into the paper, tell us: What are infantile spasms, and why is it important to diagnose them early?
Dr. Rao: Infantile spasms, also known as epileptic spasms, are 1- to 2-second seizures, and they typically consist of sudden stiffening of the body with brief bending forward or backward of the arms, legs, and head. They usually happen around age 3-8 months, and they typically occur in clusters, most often after awakening from sleep.
The incidence is about 1 in 2,000-3,000 children. Many kids with spasms go on to develop seizures that are very difficult to treat, like Lennox-Gastaut epilepsy, and many go on to have developmental delays as well.
Dr. Wilner: Are these subtle? In other words, could a parent have a child like that and not really recognize that this is something abnormal? Or are they so dramatic that parents say: “We’re going to the emergency room?”
Dr. Rao: One of the problems we often encounter is that infants in this age group have benign sleep myoclonus and Sandifer syndrome related to reflux. Those can be very difficult mimics of spasms. Spasms are not always clear-cut, but they usually look different enough from normal baby movements to get parents to seek medical attention.
Dr. Wilner: You mentioned that the infantile spasms really are a type of epilepsy and symptomatic, usually, of some underlying neurologic condition. Why is it so important to diagnose them early?
Dr. Rao: Great question. Many studies have looked at developmental outcomes based on when spasms were diagnosed and treated, and all of them have replicated time over time that the earlier you get to treatment for the spasms, the better the outcomes are for seizure control and for development.
For this reason, infantile spasm is considered a neurologic urgency in our world. Like I said, accurate diagnosis is often complicated by these potential mimics. Prompt EEG is one of the most important things for confirmation of diagnosis.
Dr. Wilner: But to get that EEG, it has to get all the way to the neurologist, right? It’s not something they’re going to do in the ER. I saw a statistic: There are millions, if not billions, of smartphones out there. Where does the smartphone come in?
Dr. Rao: Absolutely. One of the things that we have on our side these days is that almost everyone has a smartphone at their disposal. One of the recent polls in 2021 showed that more than 95% of adults of childbearing age have smartphones with video access. As some other studies have shown in the adult world, we all really have an epilepsy monitoring unit minus the EEG in our own pockets.
It’s definitely a useful tool, as that first screening video can be used as an adjunct to the history and physical. There have been many studies on the adult epilepsy side showing the predictive value of smartphone video for differentiating things like epileptic seizures and nonepileptic spells. What we wanted to do is use smartphone video to pin down the diagnosis of infantile spasms early and get them treated as quickly as possible.
Dr. Wilner: I’m a fan. Every now and then, I do have a patient who brings in a video of some spell. I’m an adult neurologist. The patient had a spell, and you ask them – of course they don’t remember – and you ask the witness, who usually is not a trained observer. There have been one or two occasions where I thought: “Well, I don’t know if that was really a seizure.” Then they show me the video and it’s like, “Wow, that is definitely a convulsion.” A picture definitely can be worth a thousand words.
You studied this systematically for your poster. Tell me about what you did.
Dr. Rao: Since the poster, we’ve actually expanded the study, so I’ll give you the updated version. We looked at 101 infants retrospectively at two large children’s health care centers: Nemours Children’s, associated with Mayo Clinic in Jacksonville, Fla., and Texas Children’s Hospital in Houston. We narrowed it down to 80 patients whom we included. Of these, 43 had smartphone video capture when they first presented and 37 had no video when they first presented.
We found a median 17-day difference in the time to diagnosis and treatment. In other words, the video group was diagnosed and treated a median of 17 days earlier than the no-video group. Although 17 days may not sound like a big number, in this context it can make a huge difference. That’s been shown by one of the key studies in our field, the UK Infantile Spasms Study, in which a 2-week difference made about a 10-point difference on the developmental scale they use – so pretty significant.
Dr. Wilner: Let me think about this for a minute. Was that because the parents brought the child in with their video and the doctor said, “Hey, that’s infantile spasms. Here’s your shot of ACTH [or whatever they’re using these days].” Or was it because the parents who were attentive enough to use video brought their kids in sooner?
Or was this the time from when they brought the child in to treatment? Is that the time you looked at? So it wasn’t just that these were more attentive parents and more likely to use the video – you’re looking at the time from presentation with or without video until treatment, is that right?
Dr. Rao: We looked at the time from the start of the spasms, as reported by the parents, to the time of diagnosis, and then from the start of spasms to the time of treatment. What you asked is a fantastic question. We wanted to know who these parents are who are taking videos versus the ones who are not.
We looked at the race/ethnicity and socioeconomic status data. There were no significant differences between the video and nonvideo groups, so that would not explain the difference in our results here.
Dr. Wilner: Do you have plans to follow these approximately 40 children 5 years from now and see who’s riding a bicycle and who’s still stuck in the stroller? Is there going to be a difference?
Dr. Rao: Because time to diagnosis and time to treatment were our primary outcomes, long-term follow-up may not really help as much in this study. We did have a couple of other ideas for future studies. One that we wanted to look at was kids who have risk factors for developing spasms, such as trisomy 21, tuberous sclerosis, and congenital cortical malformations; those kids are at a much higher risk for developing spasms around 3-8 months of life.
In giving targeted counseling to those families about how they can use smartphone video to minimize the time to diagnosis and treatment, we think we may be able to learn more and maybe do that prospectively.
The other interesting idea is using artificial intelligence technology for spasm detection in some of these smartphone videos. They’re already using it for different seizure types. It could be an efficient first pass when we get a whole bunch of smartphone videos to determine which ones we need to pursue further steps – to see whether we need to get long-term EEG monitoring or not.
Dr. Wilner: As an epileptologist, I was going to say that we have smartphone EKG. All we need now is smartphone EEG, and then you’ll have all the information you need on day one. It may be a ways away.
As a bottom line, would it be fair to say that parents should not hesitate to take a video of any suspiciously abnormal behavior and bring it to their family doctor or pediatric neurologist?
Dr. Rao: Yes. I was happy to see the Tuberous Sclerosis Alliance put out a promotional video that had some steps for when parents see things that are suspicious for spasms, and they do recommend using smartphone video and promptly showing it to their doctors. I think the difference that we hope to provide in this study is that we can now quantify the effect of having that smartphone video when they first present.
My takeaway from this study is to encourage the use of smartphone video as an adjunct tool and for providers to ask for these videos, but also for pediatric centers to develop an infrastructure – either a secure, monitored email address like we have at our center or a patient portal – where parents can submit video concerning for spasms.
Dr. Wilner: Save the trip to the doctor. Get that video out there first.
Dr. Rao: Especially in the pandemic world, right?
Dr. Wilner: Yes. I understand that you are a neurology resident. To wrap up, what’s the next step for you?
Dr. Rao: I’m finishing up my child neurology residency this year, and I’m moving out to Stanford for pediatric epilepsy fellowship. We’re preparing this project we’re talking about for submission soon, and we’re working on another project, which is a systematic review of genetic testing and the presurgical workup for pediatric drug-resistant focal epilepsy.
Dr. Wilner: Excellent. That’s pretty exciting. Good luck to you. I want to thank you very much for telling us about your research.
Dr. Rao: It was a pleasure speaking with you, and I look forward to the next time.
Dr. Wilner: I’m Dr. Andrew Wilner, reporting for Medscape. Thanks for watching.
A version of this article first appeared on Medscape.com.
This video transcript has been edited for clarity.
Andrew N. Wilner, MD: Welcome to Medscape. I’m Dr Andrew Wilner, reporting from the American Epilepsy Society meeting.
Today, I have the pleasure of speaking with Dr. Chethan Rao, a child and adolescent neurology resident from the Mayo Clinic in Jacksonville, Fla. Dr. Rao has a particular interest in pediatric epilepsy. Welcome, Dr. Rao.
Chethan Rao, DO: Thank you, Dr. Wilner. It’s a pleasure to be here, and thanks for taking the time to highlight our work.
Dr. Wilner: You had a very interesting paper at the meeting that I wanted to talk about, focused on infantile spasms and smartphone video. Before we dive into the paper, tell us: What are infantile spasms, and why is it important to diagnose them early?
Dr. Rao: Infantile spasms, also known as epileptic spasms, are 1- to 2-second seizures, and they typically consist of sudden stiffening of the body with brief bending forward or backward of the arms, legs, and head. They usually happen around age 3-8 months, and they typically occur in clusters, most often after awakening from sleep.
The incidence is about 1 in 2,000-3,000 children. Many kids with spasms go on to develop seizures that are very difficult to treat, like Lennox-Gastaut epilepsy, and many go on to have developmental delays as well.
Dr. Wilner: Are these subtle? In other words, could a parent have a child like that and not really recognize that this is something abnormal? Or are they so dramatic that parents say: “We’re going to the emergency room?”
Dr. Rao: One of the problems that we encounter often is that in this age group of infants, they have benign sleep myoclonus; they have Sandifer syndrome related to reflux. Those can be very difficult mimics of spasms. They’re not the most clear-cut, but they look usually different enough from normal baby movements that they get parents to seek medical attention.
Dr. Wilner: You mentioned that the infantile spasms really are a type of epilepsy and symptomatic, usually, of some underlying neurologic condition. Why is it so important to diagnose them early?
Dr. Rao: Great question. Many studies have looked at developmental outcomes based on when spasms were diagnosed and treated, and all of them have replicated time over time that the earlier you get to treatment for the spasms, the better the outcomes are for seizure control and for development.
For this reason, infantile spasm is considered a neurologic urgency in our world. Like I said, accurate diagnosis is often complicated by these potential mimics. Prompt EEG is one of the most important things for confirmation of diagnosis.
Dr. Wilner: But to get that EEG, it has to get all the way to the neurologist, right? It’s not something they’re going to do in the ER. I saw a statistic: There are millions, if not billions, of smartphones out there. Where does the smartphone come in?
Dr. Rao: Absolutely. One of the things that we have on our side these days is that almost everyone has a smartphone at their disposal. One of the recent polls in 2021 showed that more than 95% of adults of childbearing age have smartphones with video access. As some other studies have shown in the adult world, we all really have an epilepsy monitoring unit minus the EEG in our own pockets.
It’s definitely a useful tool, as that first screening video can be used in adjunct to history and physical. There have been many of studies on the adult epilepsy side showing the predictive value of smartphone video for differentiating things like epileptic seizures and nonepileptic spells. What we wanted to do is use smartphone video to pin the diagnosis early of infantile spasms and get it treated as quickly as possible.
Dr. Wilner: I’m a fan. Every now and then, I do have a patient who brings in a video of some spell. I’m an adult neurologist. The patient had a spell, and you ask them – of course they don’t remember – and you ask the witness, who usually is not a trained observer. There have been one or two occasions where I thought: “Well, I don’t know if that was really a seizure.” Then they show me the video and it’s like, “Wow, that is definitely a convulsion.” A picture definitely can be worth a thousand words.
You studied this systematically for your poster. Tell me about what you did.
Dr. Rao: Since the poster, we’ve actually expanded the study, so I’ll give you the updated version. We looked at 101 infants retrospectively at two large children’s health care centers: Nemours Children’s, associated with Mayo Clinic in Jacksonville, Fla., and Texas Children’s Hospital in Houston. We narrowed it down to 80 patients whom we included. Of these, 43 had smartphone video capture when they first presented and 37 had no video when they first presented.
We found a 17-day difference by median in the time to diagnosis and treatment. In other words, the video group was diagnosed and treated 17 days by median, compared with the no-video group. Although 17 days may not sound like a big number, in this context it can make a huge difference. That’s been shown by one of these key studies in our field called the UK Infantile Spasms Study. The 2-week difference made about a 10-point difference on the developmental scale that they use – so pretty significant.
Dr. Wilner: Let me think about this for a minute. Was that because the parents brought the child in with their video and the doctor said, “Hey, that’s infantile spasms. Here’s your shot of ACTH [or whatever they’re using these days].” Or was it because the parents who were attentive enough to use video brought their kids in sooner?
Or was this the time from when they brought the child in to treatment? Is that the time you looked at? So it wasn’t just that these were more attentive parents and more likely to use the video – you’re looking at the time from presentation with or without video until treatment, is that right?
Dr. Rao: We looked to the time from the start of the spasms, as reported by the parents, to the time of diagnosis and then the start of spasms to the time of treatment. What you asked was a fantastic question. We wanted to know who these parents are who are taking videos versus the ones that are not.
We looked at the race/ethnicity data and socioeconomic status data. There were no significant differences between the video and nonvideo group. That would not explain the difference in our results here.
Dr. Wilner: Do you have plans to follow these approximately 40 children 5 years from now and see who’s riding a bicycle and who’s still stuck in the stroller? Is there going to be a difference?
Dr. Rao: Because time to diagnosis and time to treatment were our primary outcomes, long-term follow-up may not really help as much in this study. We did have a couple of other ideas for future studies. One that we wanted to look at was kids who have risk factors for developing spasms, such as trisomy 21, tuberous sclerosis, and congenital cortical malformations; those kids are at a much higher risk for developing spasms around 3-8 months of life.
In giving targeted counseling to those families about how they can use smartphone video to minimize the time to diagnosis and treatment, we think we may be able to learn more and maybe do that prospectively.
The other interesting idea is using artificial intelligence technology for spasm detection in some of these smartphone videos. They’re already using it for different seizure types. It could be an efficient first pass when we get a whole bunch of smartphone videos to determine which ones we need to pursue further steps – to see whether we need to get long-term EEG monitoring or not.
Dr. Wilner: As an epileptologist, I was going to say that we have smartphone EKG. All we need now is smartphone EEG, and then you’ll have all the information you need on day one. It may be a ways away.
As a bottom line, would it be fair to say that parents should not hesitate to take a video of any suspiciously abnormal behavior and bring it to their family doctor or pediatric neurologist?
Dr. Rao: Yes. I was happy to see the Tuberous Sclerosis Alliance put out a promotional video that had some steps for when parents see things that are suspicious for spasms, and they do recommend using smartphone video and promptly showing it to their doctors. I think the difference that we hope to provide in this study is that we can now quantify the effect of having that smartphone video when they first present.
My takeaway from this study that I would like to show is encouraging the use of smartphone video as an adjunct tool and for providers to ask for the videos, but also for these pediatric centers to develop an infrastructure – either a secure, monitored email address like we have at our center or a patient portal – where parents can submit video concerning for spasms.
Dr. Wilner: Save the trip to the doctor. Get that video out there first.
Dr. Rao: Especially in the pandemic world, right?
Dr. Wilner: Yes. I understand that you are a neurology resident. To wrap up, what’s the next step for you?
Dr. Rao: I’m finishing up my child neurology residency this year, and I’m moving out to Stanford for pediatric epilepsy fellowship. We’re preparing this project we’re talking about for submission soon, and we’re working on another project, which is a systematic review of genetic testing and the presurgical workup for pediatric drug-resistant focal epilepsy.
Dr. Wilner: Excellent. That’s pretty exciting. Good luck to you. I want to thank you very much for telling us about your research.
Dr. Rao: It was a pleasure speaking with you, and I look forward to the next time.
Dr. Wilner: I’m Dr Andrew Wilner, reporting for Medscape. Thanks for watching.
A version of this article first appeared on Medscape.com.
This video transcript has been edited for clarity.
Andrew N. Wilner, MD: Welcome to Medscape. I’m Dr Andrew Wilner, reporting from the American Epilepsy Society meeting.
Today, I have the pleasure of speaking with Dr. Chethan Rao, a child and adolescent neurology resident from the Mayo Clinic in Jacksonville, Fla. Dr. Rao has a particular interest in pediatric epilepsy. Welcome, Dr. Rao.
Chethan Rao, DO: Thank you, Dr. Wilner. It’s a pleasure to be here, and thanks for taking the time to highlight our work.
Dr. Wilner: You had a very interesting paper at the meeting that I wanted to talk about, focused on infantile spasms and smartphone video. Before we dive into the paper, tell us: What are infantile spasms, and why is it important to diagnose them early?
Dr. Rao: Infantile spasms, also known as epileptic spasms, are 1- to 2-second seizures, and they typically consist of sudden stiffening of the body with brief bending forward or backward of the arms, legs, and head. They usually happen around age 3-8 months, and they typically occur in clusters, most often after awakening from sleep.
The incidence is about 1 in 2,000-3,000 children. Many kids with spasms go on to develop seizures that are very difficult to treat, like Lennox-Gastaut epilepsy, and many go on to have developmental delays as well.
Dr. Wilner: Are these subtle? In other words, could a parent have a child like that and not really recognize that this is something abnormal? Or are they so dramatic that parents say: “We’re going to the emergency room?”
Dr. Rao: One of the problems that we encounter often is that in this age group of infants, they have benign sleep myoclonus; they have Sandifer syndrome related to reflux. Those can be very difficult mimics of spasms. They’re not the most clear-cut, but they look usually different enough from normal baby movements that they get parents to seek medical attention.
Dr. Wilner: You mentioned that the infantile spasms really are a type of epilepsy and symptomatic, usually, of some underlying neurologic condition. Why is it so important to diagnose them early?
Dr. Rao: Great question. Many studies have looked at developmental outcomes based on when spasms were diagnosed and treated, and all of them have replicated time over time that the earlier you get to treatment for the spasms, the better the outcomes are for seizure control and for development.
For this reason, infantile spasm is considered a neurologic urgency in our world. Like I said, accurate diagnosis is often complicated by these potential mimics. Prompt EEG is one of the most important things for confirmation of diagnosis.
Dr. Wilner: But to get that EEG, it has to get all the way to the neurologist, right? It’s not something they’re going to do in the ER. I saw a statistic: There are millions, if not billions, of smartphones out there. Where does the smartphone come in?
Dr. Rao: Absolutely. One of the things that we have on our side these days is that almost everyone has a smartphone at their disposal. One of the recent polls in 2021 showed that more than 95% of adults of childbearing age have smartphones with video access. As some other studies have shown in the adult world, we all really have an epilepsy monitoring unit minus the EEG in our own pockets.
It’s definitely a useful tool, as that first screening video can be used in adjunct to history and physical. There have been many of studies on the adult epilepsy side showing the predictive value of smartphone video for differentiating things like epileptic seizures and nonepileptic spells. What we wanted to do is use smartphone video to pin the diagnosis early of infantile spasms and get it treated as quickly as possible.
Dr. Wilner: I’m a fan. Every now and then, I do have a patient who brings in a video of some spell. I’m an adult neurologist. The patient had a spell, and you ask them – of course they don’t remember – and you ask the witness, who usually is not a trained observer. There have been one or two occasions where I thought: “Well, I don’t know if that was really a seizure.” Then they show me the video and it’s like, “Wow, that is definitely a convulsion.” A picture definitely can be worth a thousand words.
You studied this systematically for your poster. Tell me about what you did.
Dr. Rao: Since the poster, we’ve actually expanded the study, so I’ll give you the updated version. We looked at 101 infants retrospectively at two large children’s health care centers: Nemours Children’s, associated with Mayo Clinic in Jacksonville, Fla., and Texas Children’s Hospital in Houston. We narrowed it down to 80 patients whom we included. Of these, 43 had smartphone video capture when they first presented and 37 had no video when they first presented.
We found a 17-day difference in the median time to diagnosis and treatment. In other words, the video group was diagnosed and treated a median of 17 days sooner than the no-video group. Although 17 days may not sound like a big number, in this context it can make a huge difference. That’s been shown by one of the key studies in our field, the UK Infantile Spasms Study, in which a 2-week difference made about a 10-point difference on the developmental scale that they used – so pretty significant.
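For readers who want to see what this kind of comparison looks like in practice, here is a minimal sketch of computing a median lag difference between two groups. The day counts are invented (chosen so the difference matches the reported 17 days); they are not the study’s data.

```python
# Illustrative only: median lag to diagnosis in two hypothetical groups.
from statistics import median

video_days = [10, 14, 21, 8, 30]      # hypothetical days from spasm onset to diagnosis
no_video_days = [25, 40, 31, 28, 52]  # hypothetical values for the no-video group

lag_difference = median(no_video_days) - median(video_days)
print(f"Median difference: {lag_difference} days")  # prints 17, as in the study
```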
Dr. Wilner: Let me think about this for a minute. Was that because the parents brought the child in with their video and the doctor said, “Hey, that’s infantile spasms. Here’s your shot of ACTH [or whatever they’re using these days].” Or was it because the parents who were attentive enough to use video brought their kids in sooner?
Or was this the time from when they brought the child in to treatment? Is that the time you looked at? So it wasn’t just that these were more attentive parents and more likely to use the video – you’re looking at the time from presentation with or without video until treatment, is that right?
Dr. Rao: We looked at the time from the start of the spasms, as reported by the parents, to the time of diagnosis, and from the start of spasms to the time of treatment. What you asked was a fantastic question. We wanted to know who these parents are who are taking videos versus the ones who are not.
We looked at the race/ethnicity data and socioeconomic status data. There were no significant differences between the video and nonvideo group. That would not explain the difference in our results here.
Dr. Wilner: Do you have plans to follow these approximately 40 children 5 years from now and see who’s riding a bicycle and who’s still stuck in the stroller? Is there going to be a difference?
Dr. Rao: Because time to diagnosis and time to treatment were our primary outcomes, long-term follow-up may not really help as much in this study. We did have a couple of other ideas for future studies. One that we wanted to look at was kids who have risk factors for developing spasms, such as trisomy 21, tuberous sclerosis, and congenital cortical malformations; those kids are at a much higher risk for developing spasms around 3-8 months of life.
By giving targeted counseling to those families about how they can use smartphone video to minimize the time to diagnosis and treatment, we think we may be able to learn more, perhaps prospectively.
The other interesting idea is using artificial intelligence technology for spasm detection in some of these smartphone videos. It’s already being used for different seizure types. It could be an efficient first pass when we receive a whole batch of smartphone videos, determining which ones warrant further steps – such as long-term EEG monitoring.
Dr. Wilner: As an epileptologist, I was going to say that we have smartphone EKG. All we need now is smartphone EEG, and then you’ll have all the information you need on day one. It may be a ways away.
As a bottom line, would it be fair to say that parents should not hesitate to take a video of any suspiciously abnormal behavior and bring it to their family doctor or pediatric neurologist?
Dr. Rao: Yes. I was happy to see the Tuberous Sclerosis Alliance put out a promotional video that had some steps for when parents see things that are suspicious for spasms, and they do recommend using smartphone video and promptly showing it to their doctors. I think the difference that we hope to provide in this study is that we can now quantify the effect of having that smartphone video when they first present.
My takeaway from this study is to encourage the use of smartphone video as an adjunct tool and for providers to ask for these videos, but also for pediatric centers to develop an infrastructure – either a secure, monitored email address like we have at our center or a patient portal – where parents can submit videos concerning for spasms.
Dr. Wilner: Save the trip to the doctor. Get that video out there first.
Dr. Rao: Especially in the pandemic world, right?
Dr. Wilner: Yes. I understand that you are a neurology resident. To wrap up, what’s the next step for you?
Dr. Rao: I’m finishing up my child neurology residency this year, and I’m moving out to Stanford for pediatric epilepsy fellowship. We’re preparing this project we’re talking about for submission soon, and we’re working on another project, which is a systematic review of genetic testing and the presurgical workup for pediatric drug-resistant focal epilepsy.
Dr. Wilner: Excellent. That’s pretty exciting. Good luck to you. I want to thank you very much for telling us about your research.
Dr. Rao: It was a pleasure speaking with you, and I look forward to the next time.
Dr. Wilner: I’m Dr. Andrew Wilner, reporting for Medscape. Thanks for watching.
A version of this article first appeared on Medscape.com.
Nap length linked to cognitive changes
No wonder we feel worse after naps
Some of us have hectic schedules that may make a nap feel more necessary. It’s common knowledge that naps shouldn’t be too long – maybe 20 minutes or so – but if you frequently take 3-hour naps and wake up thinking you’re late for school even though you’re 47 and have your PhD, this LOTME is for you.
Studies have shown that there is a link between napping during the day and Alzheimer’s/cognitive decline, but now we’ve got a double whammy for you: Longer and more frequent napping is linked to worse cognition after a year, and in turn, those with cognitive decline and Alzheimer’s are known to nap longer and more frequently during the day.
“We now know that the pathology related to cognitive decline can cause other changes in function,” coauthor Aron Buchman, MD, said in a statement from Rush University Medical Center. “It’s really a multisystem disorder, also including difficulty sleeping, changes in movement, changes in body composition, depression symptoms, behavioral changes, etc.”
The investigators monitored 1,400 patients over the course of 14 years with wrist bracelets that recorded daytime inactivity; an extended period of daytime inactivity was counted as a nap.
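As an illustration of the kind of heuristic described here, the sketch below flags long runs of daytime inactivity as naps. The 10-minute threshold and per-minute epochs are assumptions for illustration, not the study’s actual algorithm.

```python
# Toy actigraphy heuristic: treat a long run of zero activity as a nap.
def detect_naps(activity_counts, threshold=10):
    """activity_counts: per-minute daytime activity values (0 = inactive).
    Returns (start_minute, length) for each inactive run >= threshold minutes."""
    naps, run_start = [], None
    for minute, count in enumerate(activity_counts):
        if count == 0 and run_start is None:
            run_start = minute                      # inactivity begins
        elif count > 0 and run_start is not None:
            if minute - run_start >= threshold:     # long enough to call a nap
                naps.append((run_start, minute - run_start))
            run_start = None
    if run_start is not None and len(activity_counts) - run_start >= threshold:
        naps.append((run_start, len(activity_counts) - run_start))
    return naps

print(detect_naps([3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 1]))  # [(1, 11)]
```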
At the beginning of the study, 75% of the study subjects had no cognitive impairment, 19.5% had some cognitive impairment, and approximately 4% had Alzheimer’s. Napping during the day only increased about 11 minutes a year for those with no signs of cognitive impairment, but those who showed significantly more signs of cognitive decline doubled their nap time and those actually diagnosed with Alzheimer’s tripled theirs.
The investigators did not imply that napping causes Alzheimer’s, but they noted that older people who nap more than an hour a day are at 40% higher risk. It is something to consider and monitor.
Sometimes, after all, a nap seems like the best idea ever, but more often than not we wake up feeling 10 times worse. Our bodies may be giving us a heads up.
Pokemon Go away depression
The summer of 2016 was a great time if you happened to be a fan of Pokemon. Which is quite a lot of people. For almost 20 years millions have enjoyed the games and animated series, but Pokemon Go brought the thrill of catching Pokemon to life in a whole new way. For the first time, you could go out into the world and pretend you were a real Pokemon trainer, and everywhere you went, there would be others like you.
The ability to chase after Pikachu and Charizard in real life (well, augmented reality, but close enough) seemed to bring people a lot of joy, but seemed is never good enough for science. Can’t have anecdotes, we need data! So researchers at the London School of Economics and Political Science conducted a study into how Pokemon Go affected local Internet search rates of depression as the game was released slowly around the world.
Through analyzing Google Trends data for words like “depression,” “anxiety,” and “stress,” the researchers found that the release of Pokemon Go was significantly associated with a noticeable, though short-term, drop in depression-related Internet searches. Location-based augmented reality games may alleviate symptoms of mild depression, the researchers said, as they encourage physical activity, face-to-face socialization, and exposure to nature, though they added that simply going outside is likely not enough to combat clinical cases of severe depression.
Still, augmented reality games represent a viable target for public health investment, since they’re easy to use and inexpensive to make. That said, we’re not sure we want the FDA or CDC making a new Pokemon Go game. They’d probably end up filling the streets with Mr. Mime. And no one would leave their house for that.
And now a word from our sponsor
How many times has this happened to you? You need to repair a jet engine, inspect a nuclear reactor cooling system, AND perform bowel surgery, but you can’t carry around all the heavy, old-fashioned tools needed for those jobs.
Well, we’ve got one tool that can do it all! And that tool is a snake. No, it’s a robot.
It’s both! It’s the COntinuum roBot for Remote Applications. COBRA is the robot that looks like a snake! A snake that’s 5 meters long but only as thick as a pencil (about 9 mm in diameter). A robot with “extraordinary manoeuvrability and responsiveness due to … a compliant-joint structure and multiple continuous sections that enable it to bend at around 90 degrees,” according to the team at the University of Nottingham (England) that developed it.
COBRA comes equipped with a stereovision camera and a miniature cutting tool to perform complex industrial repair, but other devices can be interchanged for possible medical use.
COBRA and its joystick-like controller were designed to be easy to use. Dr. Oladejo Olaleye, an ear, nose, and throat and robotic surgeon at University Hospitals of Leicester who is directing its surgical development, was able to use COBRA on a dummy after just 5 minutes of training. He called it “the future of diagnostic endoscopy and therapeutic surgery.”
Don’t be the last aircraft engineer/nuclear technician/surgeon on your block to have this ultraslender, ultramaneuverable reptilian repair robot. Get your COBRA now! Operators are standing by.
Disclaimer: Robot is still under development and not yet on sale.
Rule, (worm) Britannia!
As long as there have been people, there have been parasitic worms living in their guts. Helminth infection is a continuing and largely ignored crisis in poor, tropical nations, though worm-based diseases have been basically eliminated from wealthier countries.
This wasn’t always the case, however, as a study published in PLOS Neglected Tropical Diseases (now there’s a specific topic) has found. The researchers detail the glorious history of helminth infestation in the United Kingdom from the Victorian era all the way back to prehistory, scouring hundreds of skeletons found in 17 sites across the country for eggs, which can remain intact for thousands of years.
The researchers found that two eras in particular had very high rates of infection. Unsurprisingly, the late medieval era was one of them, but the other is less obvious. The Romans were famous for their hygiene, their baths, and their plumbing, but maybe they also should be famous for the abundance of worms in their bellies. That doesn’t make sense at first: Shouldn’t good hygiene lower infection? The benefits of a good sewer system, however, are lessened when the waste containing said infectious organisms is used to fertilize crops. Recycling is generally a good thing, but less so when you’re recycling parasitic worms.
Curiously, of the three sites from the industrial age, only the one in London had high levels of worm infestation. Considering how dirty and cramped 19th-century British cities were, one might expect disease to run rampant (tuberculosis certainly did), but the sites in Oxford and Birmingham were almost devoid of worms. The researchers theorized that this was because of access to clean well water. Or maybe worms just have a thing for London. [Editor’s note: It’s probably not that.]
How do we distinguish between viral and bacterial meningitis?
Bacteria and viruses are the leading causes of community-acquired meningitis. Bacterial meningitis is associated with high morbidity and mortality, and prompt treatment with appropriate antibiotics is essential to optimize outcomes. Early diagnosis is therefore crucial for selecting patients who need antibiotics. On the other hand, the course of viral meningitis is generally benign, and there is usually no specific antimicrobial treatment required. Distinguishing between viral and bacterial causes of meningitis can be challenging; therefore, many patients receive empiric antibiotic treatment.
Etiology
Among the etiologic agents of viral meningitis, the nonpolio enteroviruses (echovirus 30, 11, 9, 6, 7, 18, 16, 71, and 25; coxsackievirus B2, A9, B1, B3, and B4) are the most common, responsible for more than 85% of cases. Other viruses potentially responsible for meningitis include herpes simplex virus (HSV), primarily type 2, and the flaviviruses (such as dengue virus).
Clinical presentation
The clinical presentation of bacterial meningitis is more severe than that of viral meningitis. The classic clinical triad of bacterial meningitis consists of fever, neck stiffness, and altered mental status. Only 41% of cases present with these three symptoms, however. Other clinical characteristics include severe headaches, decreased level of consciousness, nausea, vomiting, seizures, focal neurologic signs, and skin rash.
Viral meningitis is usually not associated with a decreased level of consciousness or significant decline in overall health status. The most frequently reported symptoms are unusual headaches, fever, nausea, vomiting, sensitivity to light, and neck stiffness. Patients may also present with skin changes and lymphadenopathy, and, depending on etiology, genital ulcers.
Diagnosis
The diagnosis of bacterial meningitis is based on clinical symptoms, blood panels (blood count, inflammation markers, cultures), and cerebrospinal fluid (CSF) cultures. Gram staining and latex agglutination may lead to false-negative results, and cultures may take a few days to provide a definitive result. Therefore, empiric antibiotic treatment is often started until the etiology can be determined.
A spinal tap must always be performed, preferably after imaging has ruled out the risk of herniation. After CSF samples have been collected, they must undergo complete analysis, including cytological, biochemical, and microbiological evaluation, using conventional and, when available, molecular testing methods.
Cytological and biochemical analyses of CSF may be helpful, as findings may indicate a higher probability of either bacterial or viral etiology.
CSF samples collected from patients with acute bacterial meningitis show a characteristic neutrophilic pleocytosis (cell count usually ranging from hundreds to a few thousand, with >80% polymorphonuclear cells). In some cases of L. monocytogenes meningitis (25%-30%), a lymphocytic predominance may occur. Typically, glucose is low (CSF glucose-to-blood-glucose ratio ≤0.4 or <40 mg/dL), protein is very high (>200 mg/dL), and the CSF lactate level is high (≥31.53 mg/dL).
In viral meningitis, the white blood cell count is generally 10-300 cells/mm3. Although glucose levels are normal in most cases, they may be below normal limits in lymphocytic choriomeningitis virus (LCMV), HSV, mumps virus, and poliovirus meningitis. Protein levels tend to be slightly elevated, but they may still be within the reference range.
A recent study investigated which of the cytological and biochemical markers best correlate with the definitive etiologic diagnosis. This study, in which CSF samples were collected and analyzed from 2013 to 2017, considered cases of bacterial or viral meningitis confirmed via microbiological evaluation or polymerase chain reaction (PCR). CSF lactate was the best single CSF parameter, and a CSF lactate level above 30 mg/dL virtually excluded a viral etiology.
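To make the rules of thumb above concrete, here is a minimal sketch that encodes the quoted CSF thresholds as a crude score. It is illustrative only, not a validated clinical decision tool; the "majority of findings" rule is an assumption, not something from the studies cited.

```python
# Crude, illustrative encoding of the CSF thresholds quoted above.
def csf_suggests_bacterial(wbc_per_mm3, pct_polymorphonuclear,
                           csf_blood_glucose_ratio, protein_mg_dl,
                           lactate_mg_dl):
    findings = [
        wbc_per_mm3 > 300,               # viral counts are generally 10-300 cells/mm3
        pct_polymorphonuclear > 80,      # neutrophilic pleocytosis
        csf_blood_glucose_ratio <= 0.4,  # low glucose
        protein_mg_dl > 200,             # very high protein
        lactate_mg_dl >= 31.53,          # high CSF lactate
    ]
    return sum(findings) >= 3  # crude majority of bacterial-type findings

print(csf_suggests_bacterial(1500, 90, 0.3, 250, 40))  # True
print(csf_suggests_bacterial(120, 20, 0.6, 60, 15))    # False
```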
Etiologic determination
Although overall CSF analysis and ancillary parameters, particularly CSF lactate, contribute substantially, a precise etiologic diagnosis remains of great importance in cases of acute meningitis. Reaching that diagnosis is not simple, as identifying the causative microorganism is often difficult, and conventional microbiological methods have limits. Bacterioscopy is poorly sensitive, and although bacterial cultures are more sensitive, they can delay diagnosis because of the time it takes for bacteria to grow in culture media.
Targeted molecular detection methods are usually more sensitive than conventional microbiological methods. Panel-based molecular tests identify multiple pathogens in a single test. In 2015, the U.S. Food and Drug Administration authorized the first commercial multiplex detection system for infectious causes of community-acquired meningitis and encephalitis. This test, the BioFire FilmArray system, detects 14 bacterial, viral, and fungal pathogens in a turnaround time of about 1 hour, including S. pneumoniae, N. meningitidis, H. influenzae, S. agalactiae (i.e., group B Streptococcus), E. coli (serotype K1), L. monocytogenes, HSV-1, HSV-2, varicella-zoster virus (VZV), cytomegalovirus (CMV), human herpesvirus 6 (HHV-6), human parechovirus (HPeV), and Cryptococcus neoformans/gattii.
A meta-analysis of eight diagnostic accuracy studies evaluating the BioFire FilmArray system showed a high sensitivity of 90% (95% confidence interval, 86%-93%) and specificity of 97% (95% CI, 94%-99%). The FilmArray ME panel can halve the time to a microbiological result, allowing for earlier discontinuation of antimicrobial agents and earlier hospital discharge in cases of viral meningitis.
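What pooled sensitivity and specificity mean at the bedside depends on the pretest probability. The sketch below applies Bayes’ rule to the figures above; the 30% pretest probability is a hypothetical value chosen purely for illustration.

```python
# Convert sensitivity/specificity into post-test probabilities via Bayes' rule.
def post_test_probabilities(sens, spec, pretest):
    ppv = sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    npv = spec * (1 - pretest) / (spec * (1 - pretest) + (1 - sens) * pretest)
    return ppv, npv

# Pooled estimates above (90%/97%) at an assumed 30% pretest probability.
ppv, npv = post_test_probabilities(0.90, 0.97, 0.30)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # about 92.8% and 95.8%
```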
Conclusion
Acute community-acquired meningitis is usually the result of viral or bacterial infection. Given the low specificity of clinical symptoms and, very often, of general laboratory findings, many patients are treated empirically with antibiotics. High-sensitivity, high-specificity molecular techniques allow for rapid identification of a bacterial etiology (which requires antibiotic therapy) or a viral etiology, which can be managed with symptomatic treatment alone and does not usually require extended hospitalization. These new techniques can therefore improve the quality of care for patients with viral meningitis.
A version of this article first appeared on Medscape.com.
Deep brain stimulation fails to halt depression in Parkinson’s disease
Treatment with deep brain stimulation improved motor function and quality of life, but depression scores increased after 1 year, based on data from 20 adults.
Subthalamic nucleus deep brain stimulation (STN-DBS) has emerged as an effective treatment for Parkinson’s disease symptoms, with evidence supporting improved motor symptoms and quality of life, wrote Francesca Mameli, PsyD, of Foundation IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milan, and colleagues.
However, the effect of STN-DBS on personality in Parkinson’s disease (PD) has not been well investigated, they said.
In a study published in Neuromodulation, the researchers reviewed data from 12 women and 8 men with PD who underwent bilateral STN-DBS.
Depression was assessed via the Montgomery-Asberg Depression Rating Scale (MADRS), personality characteristics were assessed via the Minnesota Multiphasic Personality Inventory–2 (MMPI-2), and motor disability was assessed via the UPDRS-III motor score. The motor score was obtained in medication-on and medication-off conditions; the off condition followed a 12-hour overnight withdrawal of dopaminergic medication. Quality of life was assessed via the Parkinson’s Disease Questionnaire–8 (PDQ-8).
After 12 months, scores on the MMPI-2 D (depression) subscale were significantly higher, increasing from a baseline mean of 56.05 to a 12-month mean of 61.90 (P = .015).
Other MMPI-2 scales showing significant increases included the DEP scale, LSE scale, WRK scale, and TRT scale. No differences appeared between male and female patients.
No significant changes occurred from pre-DBS baseline to the 12-month follow-up in MADRS scale assessment, with mean scores of 8.18 and 9.22, respectively.
A 40% improvement in UPDRS measures of motor function occurred among patients in the medication-off condition, although there was no significant change following DBS in the medication-on condition, the researchers said. Among the 18 patients with PDQ-8 assessments, quality-of-life scores were significantly higher at 12 months post DBS than at baseline (40.15 vs. 30.73; P = .011).
The researchers also examined the relationship between the total electrical energy delivered (TEED) and the occurrence of personality trait shift. In the TEED analysis, “only the energy on the right side was inversely correlated with the changes in depression,” they wrote.
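TEED is commonly approximated, following Koss and colleagues, as stimulation voltage squared times frequency times pulse width, divided by impedance, per second of stimulation. Whether the present study used exactly this formula is not stated here, so the sketch below should be read as the conventional approximation, with example parameter values.

```python
# Conventional TEED approximation (Koss et al.): V^2 * f * pw / Z, per second.
def teed_per_second(voltage_v, frequency_hz, pulse_width_s, impedance_ohm):
    return (voltage_v ** 2) * frequency_hz * pulse_width_s / impedance_ohm

# Example settings: 2.5 V, 130 Hz, 60-microsecond pulse width, 1,000-ohm impedance.
print(f"{teed_per_second(2.5, 130, 60e-6, 1000):.2e} joules/s")  # ~4.88e-05
```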
“Because of the complexity of psychiatric phenomena, it would be advisable to take a cautious approach by including psychiatric evaluation by interview for a better selection of patients who score close to the pathological cutoffs in MADRS and MMPI-2,” the researchers wrote in their discussion.
The study findings were limited by several factors including the small sample size, lack of data on the prevalence and severity of apathy, the use of scales based on self-reports, and inability to control for all factors that might affect depressive traits, the researchers noted. In addition, more research is needed to explore the correlation between TEED and personality trait changes, they said.
The results support the value of DBS in PD but emphasize the need to manage expectations, the researchers said. “Expectations should never be unrealistic, and the caring team should ensure not only that patients fully understand the risks and potential benefits of the DBS but also that it will not stop the neurodegenerative progression of the disease,” they added.
The study was supported in part by the Italian Ministry of Health. The researchers had no financial conflicts to disclose.
FROM NEUROMODULATION
Do personality traits predict cognitive decline?
Certain personality traits are associated with the risk of transitioning from normal cognition to mild cognitive impairment in older adulthood, new research shows.
Investigators analyzed data from almost 2,000 individuals enrolled in the Rush Memory and Aging Project (MAP) – a longitudinal study of older adults living in the greater Chicago metropolitan region and northeastern Illinois – with recruitment that began in 1997 and continues through today. Participants received a personality assessment as well as annual assessments of their cognitive abilities.
Those with high scores on measures of conscientiousness were significantly less likely to progress from normal cognition to mild cognitive impairment (MCI) during the study. In fact, scoring 1 standard deviation higher on the conscientiousness scale was associated with a 22% lower risk of transitioning from no cognitive impairment (NCI) to MCI. On the other hand, scoring 1 SD higher on the neuroticism scale was associated with a 12% increased risk of transitioning to MCI.
Participants who scored high on extraversion, as well as those who scored high on conscientiousness or low on neuroticism, tended to maintain normal cognitive functioning longer than other participants.
“Personality traits reflect relatively enduring patterns of thinking and behaving, which may cumulatively affect engagement in healthy and unhealthy behaviors and thought patterns across the lifespan,” lead author Tomiko Yoneda, PhD, a postdoctoral researcher in the department of medical social sciences, Northwestern University, Chicago, said in an interview.
“The accumulation of lifelong experiences may then contribute to susceptibility of particular diseases or disorders, such as mild cognitive impairment, or contribute to individual differences in the ability to withstand age-related neurological changes,” she added.
The study was published online in the Journal of Personality and Social Psychology.
Competing risk factors
Personality traits “reflect an individual’s persistent patterns of thinking, feeling, and behaving,” Dr. Yoneda said.
“For example, conscientiousness is characterized by competence, dutifulness, and self-discipline, while neuroticism is characterized by anxiety, depressive symptoms, and emotional instability. Likewise, individuals high in extraversion tend to be enthusiastic, gregarious, talkative, and assertive,” she added.
Previous research “suggests that low conscientiousness and high neuroticism are associated with an increased risk of cognitive impairment,” she continued. However, “there is also an increased risk of death in older adulthood – in other words, these outcomes are ‘competing risk factors.’”
Dr. Yoneda said her team wanted to “examine the impact of personality traits on the simultaneous risk of transitioning to mild cognitive impairment, dementia, and death.”
For the study, the researchers analyzed data from 1,954 participants in MAP (mean age at baseline 80 years, 73.7% female, 86.8% White), who received a personality assessment and annual assessments of their cognitive abilities.
To assess personality traits – in particular, conscientiousness, neuroticism, and extraversion – the researchers used the NEO Five Factor Inventory (NEO-FFI). They also used multistate survival modeling to examine the potential association between these traits and transitions from one cognitive status category to another (NCI, MCI, and dementia) and to death.
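To make the modeling approach concrete, a multistate model of this kind can be pictured as a set of per-interval transition probabilities among the four states, with death as an absorbing state and backward transitions allowed. The simulation below is a minimal sketch under made-up transition probabilities, not the authors’ fitted model.

```python
# A minimal sketch of a four-state multistate process (NCI, MCI,
# dementia, death). The yearly transition probabilities are invented
# for illustration -- they are not the values estimated in the study.
import numpy as np

states = ["NCI", "MCI", "dementia", "death"]
# Rows = current state, columns = next state; each row sums to 1.
# Note the nonzero MCI -> NCI entry: backward transitions are allowed.
P = np.array([
    [0.88, 0.08, 0.00, 0.04],  # from NCI
    [0.10, 0.74, 0.10, 0.06],  # from MCI
    [0.00, 0.02, 0.83, 0.15],  # from dementia
    [0.00, 0.00, 0.00, 1.00],  # death is absorbing
])

rng = np.random.default_rng(seed=0)
state = 0  # start with no cognitive impairment
for year in range(1, 21):
    state = rng.choice(4, p=P[state])
    print(f"year {year}: {states[state]}")
    if states[state] == "death":
        break
```

In the study itself, covariates such as personality scores shift the transition intensities rather than entering a fixed matrix like this one; the sketch only shows why transitions in both directions, plus death as a competing outcome, must be modeled jointly.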
Cognitive healthspan
By the end of the study, over half of the sample (54%) had died.
Most transitions showed “relative stability in cognitive status across measurement occasions”:
- NCI to NCI (n = 7,368)
- MCI to MCI (n = 1,244)
- Dementia to dementia (n = 876)
There were 725 “backward transitions” from MCI to NCI, “which may reflect improvement or within-person variability in cognitive functioning, or learning effects,” the authors note.
There were only 114 “backward transitions” from dementia to MCI and only 12 from dementia to NCI, “suggesting that improvement in cognitive status was relatively rare, particularly once an individual progresses to dementia.”
After adjusting for demographics, depressive symptoms, and apolipoprotein E (APOE) ε4 allele status, the researchers found that personality traits were the most important factors in the transition from NCI to MCI.
Higher conscientiousness was associated with a decreased risk of transitioning from NCI to MCI (hazard ratio, 0.78; 95% confidence interval, 0.72-0.85). Conversely, higher neuroticism was associated with an increased risk of transitioning from NCI to MCI (HR, 1.12; 95% CI, 1.04-1.21) and a significantly decreased likelihood of transition back from MCI to NCI (HR, 0.90; 95% CI, 0.81-1.00).
Scoring ~6 points on a conscientiousness scale ranging from 0-48 (that is, 1 SD on the scale) was significantly associated with ~22% lower risk of transitioning forward from NCI to MCI, while scoring ~7 more points on a neuroticism scale (1 SD) was significantly associated with ~12% higher risk of transitioning from NCI to MCI.
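The arithmetic connecting these hazard ratios to the percentages quoted is simple, and a per-SD estimate can be rescaled to a per-point one under the usual log-linear Cox specification. The sketch below walks through it; the six- and seven-points-per-SD values are those given in the article.

```python
import math

# Hazard ratios per 1 SD, as reported above
hr_conscientiousness = 0.78  # NCI -> MCI, per ~6-point (1 SD) increase
hr_neuroticism = 1.12        # NCI -> MCI, per ~7-point (1 SD) increase

# A hazard ratio below 1 implies a lower risk; above 1, a higher risk.
print(f"{(1 - hr_conscientiousness) * 100:.0f}% lower risk per SD")   # 22%
print(f"{(hr_neuroticism - 1) * 100:.0f}% higher risk per SD")        # 12%

# Rescaling a per-SD HR to a per-point HR (log-linear in the score):
hr_per_point = math.exp(math.log(hr_conscientiousness) / 6)
print(f"per-point conscientiousness HR ~ {hr_per_point:.3f}")         # ~0.959
```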
Higher extraversion was associated with an increased likelihood of transitioning from MCI back to NCI (HR, 1.12; 95% CI, 1.03-1.22). Although extraversion was not associated with a longer total lifespan, participants who scored high on extraversion, as well as those who scored high on conscientiousness or low on neuroticism, maintained normal cognitive function longer than other participants.
“Our results suggest that high conscientiousness and low neuroticism may protect individuals against mild cognitive impairment,” said Dr. Yoneda.
Importantly, individuals who were either higher in conscientiousness, higher in extraversion, or lower in neuroticism had more years of “cognitive healthspan,” meaning more years without cognitive impairment, she added.
In addition, “individuals lower in neuroticism and higher in extraversion were more likely to recover after receiving an MCI diagnosis, suggesting that these traits may be protective even after an individual starts to progress to dementia,” she said.
The authors note that the study focused on only three of the Big Five personality traits; the other two – openness to experience and agreeableness – may also be associated with cognitive aging processes and mortality.
Nevertheless, given the current results, alongside extensive research in the personality field, aiming to increase conscientiousness through persistent behavioral change is one potential strategy for promoting healthy cognitive aging, Dr. Yoneda said.
‘Invaluable window’
In a comment, Brent Roberts, PhD, professor of psychology, University of Illinois Urbana-Champaign, said the study provides an “invaluable window into how personality affects the process of decline and either accelerates it, as in the role of neuroticism, or decelerates it, as in the role of conscientiousness.”
“I think the most fascinating finding was the fact that extraversion was related to transitioning from MCI back to NCI. These types of transitions have simply not been part of prior research, and it provides utterly unique insights and opportunities for interventions that may actually help people recover from a decline,” said Dr. Roberts, who was not involved in the research.
Claire Sexton, DPhil, Alzheimer’s Association director of scientific programs and outreach, called the paper “novel” because it investigated the transitions between normal cognition and mild impairment and between mild impairment and dementia.
Dr. Sexton, who was not associated with the research team, cautioned that the study is observational, “so it can illuminate associations or correlations, but not causes. As a result, we can’t say for sure what the mechanisms are behind these potential connections between personality and cognition, and more research is needed.”
The research was supported by the Alzheimer Society Research Program, Social Sciences and Humanities Research Council, and the National Institute on Aging of the National Institutes of Health. Dr. Yoneda and co-authors, Dr. Roberts, and Dr. Sexton have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY
Childhood abuse may increase risk of MS in women
Childhood abuse may increase the risk of developing multiple sclerosis (MS) in women, according to the first prospective cohort study of its kind.
More research is needed to uncover underlying mechanisms of action, according to lead author Karine Eid, MD, a PhD candidate at Haukeland University Hospital, Bergen, Norway, and colleagues.
“Trauma and stressful life events have been associated with an increased risk of autoimmune disorders,” the investigators wrote in the Journal of Neurology, Neurosurgery & Psychiatry. “Whether adverse events in childhood can have an impact on MS susceptibility is not known.”
The present study recruited participants from the Norwegian Mother, Father and Child cohort, a population consisting of Norwegian women who were pregnant from 1999 to 2008. Of the 77,997 participating women, 14,477 reported emotional, sexual, and/or physical abuse in childhood, while the remaining 63,520 women reported no abuse. After a mean follow-up of 13 years, 300 women were diagnosed with MS, among whom 24% reported a history of childhood abuse, compared with 19% among women who did not develop MS.
To look for associations between childhood abuse and risk of MS, the investigators used a Cox model adjusted for confounders and mediators, including smoking, obesity, adult socioeconomic factors, and childhood social status. The model revealed that emotional abuse increased the risk of MS by 40% (hazard ratio [HR] 1.40; 95% confidence interval [CI], 1.03-1.90), and sexual abuse increased the risk of MS by 65% (HR 1.65; 95% CI, 1.13-2.39).
Although physical abuse alone did not significantly increase risk of MS (HR 1.31; 95% CI, 0.83-2.06), it did contribute to a dose-response relationship when women were exposed to more than one type of childhood abuse. Women exposed to two out of three abuse categories had a 66% increased risk of MS (HR 1.66; 95% CI, 1.04-2.67), whereas women exposed to all three types of abuse had the greatest increase in risk, at 93% (HR 1.93; 95% CI, 1.02-3.67).
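For readers curious about the analytic setup, the sketch below shows the general shape of an adjusted Cox proportional hazards model using the open-source lifelines library. The file name and column names are hypothetical placeholders; this is not the authors’ code or data.

```python
# A minimal sketch of an adjusted Cox model of the kind described above,
# using the lifelines library. All file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per woman (hypothetical file)

columns = [
    "followup_years",          # duration of follow-up
    "ms_diagnosis",            # 1 if MS was diagnosed during follow-up
    "emotional_abuse",         # exposure indicators
    "sexual_abuse",
    "physical_abuse",
    "smoking", "obesity",      # confounders/mediators named in the article
    "adult_ses", "childhood_social_status",
]

cph = CoxPHFitter()
cph.fit(df[columns], duration_col="followup_years", event_col="ms_diagnosis")
cph.print_summary()  # hazard ratios are reported as exp(coef)
```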
Dr. Eid and colleagues noted that their findings are supported by previous retrospective research, and discussed possible mechanisms of action.
“The increased risk of MS after exposure to childhood sexual and emotional abuse may have a biological explanation,” they wrote. “Childhood abuse can cause dysregulation of the hypothalamic-pituitary-adrenal axis, lead to oxidative stress, and induce a proinflammatory state decades into adulthood. Psychological stress has been shown to disrupt the blood-brain barrier and cause epigenetic changes that may increase the risk of neurodegenerative disorders, including MS.
“The underlying mechanisms behind this association should be investigated further,” they concluded.
Study findings should guide interventions
Commenting on the research, Ruth Ann Marrie, MD, PhD, professor of medicine and community health sciences and director of the multiple sclerosis clinic at Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, said that the present study “has several strengths compared to prior studies – including that it is prospective and the sample size.”
Dr. Marrie, who was not involved in the study, advised clinicians in the field to take note of the findings, as patients with a history of abuse may need unique interventions.
“Providers need to recognize the higher prevalence of childhood maltreatment in people with MS,” Dr. Marrie said in an interview. “These findings dovetail with others that suggest that adverse childhood experiences are associated with increased mental health concerns and pain catastrophizing in people with MS. Affected individuals may benefit from additional psychological supports and trauma-informed care.”
Tiffany Joy Braley, MD, associate professor of neurology, and Carri Polick, RN and PhD candidate at the school of nursing, University of Michigan, Ann Arbor, who published a case report last year highlighting the importance of evaluating stress exposure in MS, suggested that the findings should guide interventions at both a system and patient level.
“Although a cause-and-effect relationship cannot be established by the current study, these and related findings should be considered in the context of system level and policy interventions that address links between environment and health care disparities,” they said in a joint, written comment. “Given recent impetus to provide trauma-informed health care, these data could be particularly informative in neurological conditions which are associated with high mental health comorbidity. Traumatic stress screening practices could lead to referrals for appropriate support services and more personalized health care.”
While several mechanisms have been proposed to explain the link between traumatic stress and MS, more work is needed in this area, they added.
This knowledge gap was acknowledged by Dr. Marrie.
“Our understanding of the etiology of MS remains incomplete,” Dr. Marrie said. “We still need a better understanding of mechanisms by which adverse childhood experiences lead to MS, how they interact with other risk factors for MS (beyond smoking and obesity), and whether there are any interventions that can mitigate the risk of developing MS that is associated with adverse childhood experiences.”
The investigators disclosed relationships with Novartis, Biogen, Merck, and others. Dr. Marrie receives research support from the Canadian Institutes of Health Research, the National Multiple Sclerosis Society, MS Society of Canada, the Consortium of Multiple Sclerosis Centers, Crohn’s and Colitis Canada, Research Manitoba, and the Arthritis Society; she has no pharmaceutical support. Dr. Braley and Ms. Polick reported no conflicts of interest.
FROM THE JOURNAL OF NEUROLOGY, NEUROSURGERY, & PSYCHIATRY
What’s the most likely cause of this man’s severe headaches?
He reports that the headaches started 3 days ago. The headache is worse when he stands and resolves when he lies down. Valsalva maneuver makes the headache much worse. The headaches are located in the occipital region. He also has noticed the onset of tinnitus. A physical exam reveals that his blood pressure is 110/70 mm Hg, his pulse is 60 beats per minute, and his temperature is 36.4° C. His standing BP is 105/60 mm Hg and standing pulse is 66 bpm. Both his neurologic exam and noncontrast head CT scan are normal.
Which of the following is the most likely diagnosis?
A) Subarachnoid hemorrhage
B) POTS (Postural orthostatic tachycardia syndrome)
C) Hypnic headache
D) Spontaneous intracranial hypotension (SIH)
E) Acoustic neuroma
Given this set of symptoms, the most likely cause of this patient’s headaches is spontaneous intracranial hypotension. Orthostatic headaches are common with POTS, but the absence of tachycardia on standing makes that diagnosis unlikely.
Spontaneous intracranial hypotension causes the same symptoms we are all familiar with in the post–lumbar puncture patient, and in patients with post-LP headache the positional nature makes the diagnosis easy. Patients who have had a lumbar puncture have a clear reason for a cerebrospinal fluid (CSF) leak leading to intracranial hypotension; those with SIH do not.
Related research
Schievink summarized a wealth of useful information in a review of patients with spontaneous intracranial hypotension.1 The incidence is about 5 per 100,000, with peak incidence around age 40. The most common symptom is orthostatic headache, which typically begins within 15 minutes of standing and in many patients comes on almost immediately.
Usually the headache improves with lying down, and it is often brought on with Valsalva maneuver. Many patients report headaches that are worse in the second half of the day.
Orthostatic headache occurs in most patients with spontaneous intracranial hypotension, but it is not universal; in one series it was present in only 77% of patients with SIH.2 Patients without the typical headache were more likely to have auditory symptoms such as tinnitus and muffled hearing.3
When you suspect SIH, the appropriate workup starts with brain MR imaging with contrast. Krantz and colleagues found that dural enhancement was present in 83% of cases of SIH, the venous distention sign in 75%, and brain sagging in 61%.4
About 10% of patients with SIH have normal brain imaging, so if the clinical features strongly suggest the diagnosis, moving on to spinal imaging with CT myelography or spinal MRI is an appropriate next step.5
The causes of SIH are meningeal diverticula (usually in the thoracic or upper lumbar regions), ventral dural tears (usually from osteophytes), and cerebrospinal fluid–venous fistulas. Treatment has traditionally been conservative: bed rest, oral hydration, and caffeine. The effectiveness of this approach is unknown, and in one small series 61% of patients still had headache symptoms at 6 months.6
Epidural blood patches are likely more rapidly effective than conservative therapy. In one study comparing the two treatments, Chung and colleagues found that 77% of the patients who received an epidural blood patch had complete headache relief at 4 weeks, compared with 40% of those who received conservative measures (P < .05).7
Clinical pearls
- Strongly consider SIH in patients with positional headache.
- Brain MR should be the first diagnostic test.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at imnews@mdedge.com.
References
1. Schievink WI. Spontaneous spinal cerebrospinal fluid leaks and intracranial hypotension. JAMA. 2006;295:2286-96.
2. Mea E et al. Headache attributed to spontaneous intracranial hypotension. Neurol Sci. 2008;29:164-65.
3. Krantz PG et al. Spontaneous intracranial hypotension: 10 myths and misperceptions. Headache. 2018;58:948-59.
4. Krantz PG et al. Imaging signs in spontaneous intracranial hypotension: prevalence and relationship to CSF pressure. AJNR Am J Neuroradiol. 2016;37:1374-8.
5. Krantz PG et al. Spontaneous intracranial hypotension: pathogenesis, diagnosis, and treatment. Neuroimaging Clin N Am. 2019;29:581-94.
6. Kong D-S et al. Clinical features and long-term results of spontaneous intracranial hypotension. Neurosurgery. 2005;57:91-6.
7. Chung SJ et al. Short- and long-term outcomes of spontaneous CSF hypovolemia. Eur Neurol. 2005;54:63-7.
Meta-analysis confirms neuroprotective benefit of metformin
Key takeaways
Metformin use in patients with diabetes appears to be associated with a lower risk of neurodegenerative disease, according to a systematic review and meta-analysis of longitudinal data.
However, the heterogeneity between the available studies and the potential heterogeneity of diagnostic criteria may mean that validation studies are needed.
Why is this important?
Data suggest that metformin, the most commonly prescribed antidiabetic drug, may be neuroprotective, while diabetes is associated with an excess risk of neurodegenerative disease. Results of studies conducted specifically to investigate the benefit of the antidiabetic drug on cognitive prognosis have been unclear. A meta-analysis was published in 2020, but it included cross-sectional and case-control studies. Given the long observation period needed to measure such an outcome, only cohort studies conducted over several years can provide reliable results. This new meta-analysis attempts to circumvent this limitation.
Methods
The meta-analysis was conducted using studies published up to March 2021 that met the inclusion criteria: population-based cohort studies published in English that reported metformin exposure and the associated risk of neurodegenerative disease.
Main results
Twelve studies were included in this analysis, of which eight were retrospective and 11 were considered to be of good methodologic quality. In total, 194,792 patients were included.
Pooled data showed that the relative risk of onset of neurodegenerative disease was 0.77 (95% CI, 0.67-0.88) for patients with diabetes taking metformin versus those not taking metformin. However, heterogeneity between studies was high (I² = 78.8%; P < .001).
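For context on those two statistics, the sketch below shows how a pooled relative risk and the I² heterogeneity measure are typically derived in an inverse-variance meta-analysis. The four study-level log relative risks and standard errors are hypothetical placeholders for illustration only, not data from the twelve included studies.

```python
# Minimal sketch of inverse-variance pooling and Cochran's Q-based I².
# The study-level inputs are hypothetical, not the meta-analysis's data.
import math

# (log relative risk, standard error) for each hypothetical study
studies = [(-0.30, 0.10), (-0.15, 0.08), (-0.45, 0.12), (-0.20, 0.09)]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate
q = sum(w * (lrr - pooled_log_rr) ** 2 for (lrr, _), w in zip(studies, weights))
df = len(studies) - 1

# I² is the share of total variability attributable to between-study heterogeneity
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled RR: {math.exp(pooled_log_rr):.2f}")  # back-transformed to RR scale
print(f"I²: {i_squared:.1f}%")
```

An I² near 79%, as reported here, indicates that most of the observed variation across studies reflects genuine between-study differences rather than chance, which is why the authors' caution about heterogeneity matters.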
The effect was greater with longer metformin use, with an RR of 0.29 (95% CI, 0.13-0.44) for those who took metformin for 4 years or more. Similarly, studies conducted in Asian countries suggested an added benefit in those populations relative to studies from other locations (RR, 0.69; 95% CI, 0.64-0.74).
Sensitivity analyses confirmed these results, and subtype analyses showed no difference according to the nature of the neurodegenerative disease.
A version of this article first appeared on Univadis.