Blood Biomarkers Are Highly Accurate in Diagnosing Alzheimer’s Disease
PHILADELPHIA — Blood biomarkers can diagnose Alzheimer’s disease with a high degree of accuracy in both primary and secondary care, new research showed.
Accurate early diagnosis of Alzheimer’s disease is important because two monoclonal antibodies, donanemab (Kisunla) and lecanemab (Leqembi), are now approved by the Food and Drug Administration (FDA) for early-stage Alzheimer’s disease. However, the use of these agents requires confirmation of amyloid pathology.
A key finding of the study was that, after completing standard clinical evaluations and before seeing results of the blood test or other Alzheimer’s disease biomarkers, primary care physicians had a diagnostic accuracy of 61% and dementia specialists an accuracy of 73%. By contrast, the blood test used in the study had an accuracy of 91% for correctly classifying clinical, biomarker-verified Alzheimer’s disease.
“This underscores the potential improvement in diagnostic accuracy, especially in primary care, when implementing such a blood test,” said study investigator Sebastian Palmqvist, MD, PhD, associate professor of neurology at Lund University, Lund, and a consultant at Skåne University Hospital, Malmö, Sweden. “It also highlights the challenges in accurately identifying Alzheimer’s disease based solely on clinical evaluation and cognitive testing, even for specialists.”
The findings were presented at the 2024 Alzheimer’s Association International Conference (AAIC) and simultaneously published online in JAMA.
The study included two cohorts from primary and secondary care clinics in Sweden in which all plasma samples were analyzed together at a single time point in one batch. It also included two cohorts from Swedish primary and secondary care clinics in which the plasma samples were analyzed prospectively (biweekly) in batches throughout the enrollment period, an approach that more closely resembles clinical practice.
Primary care physicians and dementia specialists documented whether they believed their patients had Alzheimer’s disease pathology, basing the diagnoses on the standard evaluation that includes clinical examination, cognitive testing, and a CT scan prior to seeing any Alzheimer’s disease biomarker results.
They reported their certainty of the presence of Alzheimer’s disease pathology on a scale from 0 (not at all certain) to 10 (completely certain).
Plasma analyses were performed by personnel blinded to all clinical or biomarker data. Mass spectrometry assays were used to analyze Abeta42, Abeta40, phosphorylated tau 217 (p-tau217), and non–p-tau217.
Biomarkers used in the study included the percentage of plasma p-tau217 (the ratio of p-tau217 relative to non–p-tau217) and this percentage combined with the Abeta42 to Abeta40 ratio, a composite known as the amyloid probability score 2 (APS2). Researchers evaluated the percentage of p-tau217 both alone and as part of the APS2.
The study included 1213 patients with cognitive symptoms (mean age, 74.2 years; 48% women). Researchers applied biomarker cutoff values to the primary care cohort (n = 307) and the secondary care cohort (n = 300) and then evaluated the blood test prospectively in the primary care cohort (n = 208) and the secondary care cohort (n = 398).
The blood biomarker cutoff value was set at 90% specificity for Alzheimer’s disease pathology (the 1 cutoff-value approach). A 2 cutoff-value approach (using 1 upper and 1 lower cutoff value) was also used with values corresponding to 95% sensitivity and 95% specificity.
The primary outcome was presence of Alzheimer’s disease pathology. A positive finding of the Abeta biomarker was defined according to the FDA-approved cutoff value (≤ 0.072). A positive finding of the tau biomarker was defined as a p-tau217 level > 11.42 pg/mL in cerebrospinal fluid.
Researchers calculated the positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy, as well as area under the curve (AUC) values.
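For readers who want the arithmetic behind these metrics, the sketch below (in Python, with toy numbers rather than the study’s data) shows the standard definitions of PPV, NPV, and diagnostic accuracy from a confusion matrix, along with the logic of a 2 cutoff-value approach, in which values between the lower and upper cutoffs are indeterminate:

```python
# Illustrative sketch only: toy numbers, not the study's data.

def diagnostic_metrics(tp, fp, tn, fn):
    """Standard definitions of the metrics reported in the study."""
    ppv = tp / (tp + fp)          # probability disease is present given a positive test
    npv = tn / (tn + fn)          # probability disease is absent given a negative test
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, sensitivity, specificity, accuracy

def classify_two_cutoffs(value, lower, upper):
    """2 cutoff-value approach: results between the cutoffs are indeterminate."""
    if value >= upper:
        return "positive"
    if value <= lower:
        return "negative"
    return "indeterminate"

print(diagnostic_metrics(tp=85, fp=5, tn=95, fn=15))
print(classify_two_cutoffs(0.5, lower=0.3, upper=0.7))  # -> "indeterminate"
```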
Accuracy in Specialty Versus Primary Care
When the plasma samples were analyzed in a single batch, the AUC using the APS2 was 0.97 in the primary care cohort and 0.96 in the secondary care cohort.
When the plasma samples were analyzed prospectively (biweekly), the AUC using the APS2 was 0.96 in the primary care cohort and 0.97 in the secondary care cohort.
The 2 cutoff-value approach achieved PPVs of 97%-99% in patients with cognitive impairment, which is the target population of currently available antiamyloid treatments.
Although NPVs were slightly lower in these patients (87%-92% using the APS2), “we argue that a very high positive predictive value is probably more important in diagnosing patients as having Alzheimer’s disease, especially before initiating costly and burdensome antiamyloid treatment,” the investigators noted.
The PPVs were less than optimal for accurate identification of Alzheimer’s disease pathology in patients with subjective cognitive decline regardless of the cutoff-value approach used. The researchers pointed out that this could be a disadvantage for clinical trials that include patients with presymptomatic Alzheimer’s disease but not in clinical practice because there are no clinical criteria for diagnosing Alzheimer’s disease at the subjective cognitive decline stage.
The NPVs were higher in patients with subjective cognitive decline (91%-94% for the APS2 or percentage of p-tau217 alone). This indicates the blood test would be more useful for ruling out underlying Alzheimer’s disease when only subtle symptoms are present, the researchers noted.
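The divergence between PPV and NPV across these groups follows from Bayes’ rule: with sensitivity and specificity held fixed, predictive values shift with pretest probability, which is lower in subjective cognitive decline than in established cognitive impairment. A minimal sketch, using the 95% sensitivity and specificity targets mentioned above and prevalence values that are assumptions chosen purely for illustration:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity, and pretest probability (Bayes' rule)."""
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Same test, two populations; the prevalence figures are illustrative assumptions.
for label, prev in [("cognitive impairment", 0.50), ("subjective cognitive decline", 0.20)]:
    ppv, npv = predictive_values(sens=0.95, spec=0.95, prevalence=prev)
    print(f"{label}: PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

With these assumed inputs, PPV falls from 95% to roughly 83% as prevalence drops from 50% to 20%, while NPV rises toward 99%, mirroring the pattern the researchers describe.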
As for doctors identifying clinical Alzheimer’s disease, primary care physicians had a diagnostic accuracy of 61% (95% CI, 53%-69%) versus 91% (95% CI, 86%-96%) using the APS2. Dementia specialists had a diagnostic accuracy of 73% (95% CI, 68%-79%) versus 91% (95% CI, 86%-95%) using the APS2.
In the overall population, the diagnostic accuracy using the APS2 (90%; 95% CI, 88%-92%) was not different from that using the percentage of p-tau217 alone (90%; 95% CI, 88%-91%).
Very little was known about how a blood test would perform in a primary care setting, said Dr. Palmqvist. “Seeing that the test was just as accurate in primary care (about 90%) as it was in secondary care is really encouraging, especially since primary care is the first, and often final, point of entry into the healthcare system for cognitive evaluations.”
He said he was surprised the biomarkers performed so well in prospective, biweekly analyses throughout the study. “Previous studies have only demonstrated their effectiveness when all collected samples are analyzed at a single time point, which does not reflect how a blood test is used in clinical practice.”
He added that he was surprised that the tests were just as accurate in primary care as in a memory clinic setting with referred patients. This was despite the older age and higher prevalence of comorbidities in primary care, such as chronic kidney disease (present in 26% of the primary care cohort), which can be a confounding factor that increases concentrations of p-tau217.
Next Steps
The diagnostic accuracy of the blood tests is on par with FDA-cleared cerebrospinal fluid biomarkers, noted the investigators, led by senior author Oskar Hansson, MD, PhD, Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Sweden.
As blood tests are “more time effective, cost effective, and convenient” for patients, “they could also potentially replace cerebrospinal fluid tests and PET,” they added.
Dr. Palmqvist emphasized that these tests should not be used as stand-alone diagnostic tools for Alzheimer’s disease but should complement the standard clinical evaluation that includes cognitive testing and a thorough interview with the patient and a spouse or relative.
“This is crucial because Alzheimer’s disease pathology can be asymptomatic for many years, and cognitive symptoms in some patients with Alzheimer’s disease pathology may primarily result from other conditions. Misinterpreting a positive Alzheimer’s disease blood test could lead to underdiagnosis of common non–Alzheimer’s disease conditions.”
With new antiamyloid treatments possibly slowing disease progression by 30%-40% when initiated early on, a blood test for Alzheimer’s disease could lead to more people receiving an accurate and earlier diagnosis, said Dr. Palmqvist. “This could potentially result in a better response to treatment. Results from drug trials clearly indicate that the earlier treatment begins, the more effectively it can slow disease progression.”
The test used in the study is already available in the United States, the investigators said, and a similar test will be accessible in Sweden within a few months. “However, the rollout will probably be gradual and will depend on how international and national guidelines recommend their use, so developing these guidelines will be a crucial next step for widespread implementation, particularly in primary care,” said Dr. Palmqvist.
He also underlined the importance of replicating the findings in more diverse populations. “This will help ensure the tests’ reliability and effectiveness across various demographic and clinical contexts.”
An important next research step is to examine how implementing a blood test for Alzheimer’s disease affects patient care. “This includes looking at changes in management, such as referrals, other examinations, and the initiation of appropriate treatments,” said Dr. Palmqvist.
Another study presented at the meeting showed that a highly accurate blood test could significantly reduce diagnostic wait times.
Convincing Research
In an accompanying editorial, Stephen Salloway, MD, Departments of Psychiatry and Neurology, Warren Alpert Medical School, Brown University, Providence, Rhode Island, and colleagues said the study “makes the case convincingly that highly sensitive blood measures of Alzheimer’s disease can be integrated into the clinical decision-making process, including in the primary care setting.”
These tests, they wrote, “can be used to enhance the ability of clinicians to accurately identify individuals with cognitive impairment and dementia due to Alzheimer’s disease.
“Current practice should focus on using these blood biomarkers in individuals with cognitive impairment rather than in those with normal cognition or subjective cognitive decline until further research demonstrates effective interventions for individuals considered cognitively normal with elevated levels of amyloid.”
A key limitation of the study was the lack of diversity in the study sample. This makes it difficult to generalize the results across other ethnic and racial groups, the editorialists noted. Plasma assays for Alzheimer’s disease in the United States will require approval from the FDA and coverage by the Centers for Medicare & Medicaid Services to be widely adopted.
The editorialists also pointed out that advances in the diagnosis and treatment of Alzheimer’s disease will require important changes to healthcare models, including providing additional resources and staffing.
The study was supported by the Alzheimer’s Association, National Institute on Aging, European Research Council, Swedish Research Council, the GHR Foundation, and other groups. The study was conducted as an academic collaboration between Lund University and C2N Diagnostics in the United States. Neither Lund University nor its affiliated researchers received funding or compensation from C2N Diagnostics. C2N Diagnostics performed the plasma analyses blinded to any biomarker or clinical data and had no role in the statistical analysis or results. Dr. Palmqvist reported receiving institutional research support from ki:elements, Alzheimer’s Drug Discovery Foundation, and Avid Radiopharmaceuticals and consultancy or speaker fees from BioArctic, Biogen, Eisai, Eli Lilly, and Roche. Dr. Hansson reported receiving personal fees from AC Immune, ALZpath, BioArctic, Biogen, Cerveau, Eisai, Eli Lilly, Fujirebio, Roche, Bristol-Myers Squibb, Merck, Novartis, Novo Nordisk, Sanofi, and Siemens and institutional research support from ADX, AVID Radiopharmaceuticals, Biogen, Eli Lilly, Eisai, Fujirebio, GE Healthcare, Pfizer, and Roche. Dr. Salloway reported receiving grants from Biogen, Roche, Lilly, Genentech, Eisai, and Novartis; personal fees from Biogen, Roche, Lilly, Genentech, Eisai, Novo Nordisk, Prothena, AbbVie, Acumen, and Kisbee; and nonfinancial support (travel expenses for conference attendance) from Biogen, Roche, Lilly, and Acumen.
A version of this article appeared on Medscape.com.
FROM AAIC 2024
Alzheimer’s Blood Test in Primary Care Could Slash Diagnostic, Treatment Wait Times
As disease-modifying treatments for Alzheimer’s disease (AD) become available, timely and accurate diagnosis is becoming increasingly important. Currently, the patient diagnostic journey is often prolonged owing to the limited number of AD specialists, causing concern among healthcare providers and patients alike. Now, a new study suggests that use of high-performing blood tests in primary care could identify potential patients with AD much earlier, possibly reducing wait times for specialist care and receipt of treatment.
“We need to triage in primary care and send preferentially the ones that actually could be eligible for treatment, and not those who are just worried because their grandmother reported that she has Alzheimer’s,” lead researcher Soeren Mattke, MD, DSc, told this news organization.
“By combining a brief cognitive test with an accurate blood test of Alzheimer’s pathology in primary care, we can reduce unnecessary referrals, and shorten appointment wait times,” said Dr. Mattke, director of the Brain Health Observatory at the University of Southern California in Los Angeles.
The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
Projected Wait Times of 100 Months by 2033
The investigators used a Markov model to estimate wait times for patients eligible for AD treatment, taking into account constrained capacity for specialist visits.
The model included the projected US population of people aged 55 years or older from 2023 to 2032. It assumed that individuals would undergo a brief cognitive assessment in primary care and, if results suggested early-stage cognitive impairment, be referred to an AD specialist under three scenarios: no blood test, blood test to rule out AD pathology, and blood test to confirm AD pathology.
According to the model, without an accurate blood test for AD pathology, projected wait times to see a specialist are about 12 months in 2024 and will increase to more than 100 months in 2033, largely owing to a lack of specialist appointments.
In contrast, with the availability of an accurate blood test to rule out AD, average wait times would be just 3 months in 2024 and increase to only about 13 months in 2033, because far fewer patients would need to see a specialist.
The model suggests that availability of a blood test to rule in AD pathology in primary care would have a limited effect on wait times because, based on expert assumptions, 50% of patients would still undergo confirmatory testing.
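The study used a Markov model with many inputs that are not reproduced here. As a rough intuition for why constrained specialist capacity makes wait times balloon, here is a deliberately simplified backlog sketch in Python; all numbers (referral volumes, capacity) are invented for illustration and are not the study’s parameters:

```python
# Simplified queue sketch, not the study's Markov model; all numbers are invented.

def project_wait_times(annual_referrals, annual_capacity, years=10):
    """Track a growing backlog when referrals exceed specialist capacity.
    Wait time (in months) is approximated as the time to clear the backlog."""
    backlog = 0.0
    waits = []
    for _ in range(years):
        backlog = max(0.0, backlog + annual_referrals - annual_capacity)
        waits.append(12 * backlog / annual_capacity)  # months of queued demand
    return waits

capacity = 100_000  # specialist visits per year (assumed)
no_test = project_wait_times(annual_referrals=140_000, annual_capacity=capacity)
rule_out = project_wait_times(annual_referrals=105_000, annual_capacity=capacity)
print("no blood test :", [f"{w:.0f}" for w in no_test])
print("rule-out test :", [f"{w:.0f}" for w in rule_out])
```

Even in this toy version, referral volumes modestly above capacity compound into multiyear waits, while triage that trims referrals to near capacity keeps the queue short, which is the qualitative effect the model reports.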
Prioritizing Resources
“Millions of people have mild memory complaints, and if they all start coming to neurologists, it could completely flood the system and create long wait times for everybody,” Dr. Mattke told this news organization.
The problem, he said, is that brief cognitive tests performed in primary care are not particularly specific for mild cognitive impairment.
“They work pretty well for manifest advanced dementia but for mild cognitive impairment, which is a very subtle, symptomatic disease, they are only about 75% accurate. One quarter are false-positives. That’s a lot of people,” Dr. Mattke said.
He also noted that although earlier blood tests were about 75% accurate, they are now about 90% accurate, “so we are getting to a level where we can pretty much say with confidence that this is likely Alzheimer’s,” Dr. Mattke said.
Commenting on this research for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, said it is clear that blood tests, “once confirmed, could have a significant impact on the wait times” for dementia assessment.
“After an initial blood test, we might be able to rule out or rule in individuals who should go to a specialist for further follow-up and testing. This allows us to really ensure that we’re prioritizing resources accordingly,” said Dr. Snyder, who was not involved in the study.
This project was supported by a research contract from C2N Diagnostics LLC to USC. Dr. Mattke serves on the board of directors of Senscio Systems Inc. and the scientific advisory board of ALZPath and Boston Millennia Partners and has received consulting fees from Biogen, C2N, Eisai, Eli Lilly, Novartis, and Roche/Genentech. Dr. Snyder has no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM AAIC 2024
Undiagnosed, Untreated Tardive Dyskinesia Hinders Adherence to Antipsychotics
This transcript has been edited for clarity.
Tardive dyskinesia is a chronic, potentially irreversible, hyperkinetic movement disorder. And the challenge with tardive dyskinesia is that it’s underdiagnosed and undertreated. With the expanded use of dopamine receptor–blocking agents, there are about 7.5 million Americans who are now exposed and at risk for tardive dyskinesia.
It’s thought that about 500,000-750,000 of these patients may in fact have tardive dyskinesia, but only 15% are treated. So why are people not being treated for tardive dyskinesia? Well, there are a number of possible answers.
Until a few years ago, there were no Food and Drug Administration (FDA)–approved treatments for tardive dyskinesia, and these antipsychotic medications that the patients were taking, in many cases, were potentially lifesaving drugs, so they couldn’t simply be stopped. As a result of that, I think physicians developed a certain psychic blindness to identifying tardive dyskinesia, because it was their drugs that were causing the disease and yet they couldn’t be stopped. So, there really wasn’t much they could do in terms of making the diagnosis.
In addition, they were trained that tardive dyskinesia doesn’t have much impact on patients. But we now know, through surveys and other studies, that tardive dyskinesia can have a tremendous impact on patients and on your ability to treat the patient’s underlying mental health issues. It’s estimated that 50% of patients with tardive dyskinesia actually reduce the amount of antipsychotic medication they’re taking on their own, and about 40% may in fact stop their antipsychotic medication altogether.
Thirty-five percent of patients stopped seeing their doctor after they developed tardive dyskinesia, and about 20% of patients actually told other patients not to take their antipsychotic medication. So, tardive dyskinesia is impacting your ability to treat patients. In addition, it impacts the patients themselves. Nearly three out of four patients with tardive dyskinesia said, in surveys, that it caused severe impact on their psychosocial functioning.
It also impacted caregivers, with 70% of caregivers saying that the patients with tardive dyskinesia made them more anxious and limited them socially. So, we have this tremendous impact from tardive dyskinesia.
In addition, physicians sometimes don’t identify tardive dyskinesia correctly. They mistake it for another movement disorder: drug-induced parkinsonism. Or it falls under the rubric of extrapyramidal symptoms (EPS), and they were trained that you treat EPS with benztropine. The challenge with that is that benztropine is only indicated for acute dystonia or for drug-induced parkinsonism. It actually makes tardive dyskinesia worse. And, in the product insert for benztropine, it’s recommended that it should not be used in tardive dyskinesia. So if you have a patient whom you suspect has tardive dyskinesia, you have to discontinue the benztropine. That’s a really important first step.
And then, what else should you do? There are now two FDA-approved treatments for tardive dyskinesia. These are valbenazine and deutetrabenazine. Both of these drugs have been demonstrated in large double-blind, placebo-controlled studies to reduce tardive dyskinesia, as measured by the Abnormal Involuntary Movement Scale, by about 30%. These drugs have been demonstrated to be safe and well tolerated, with the main side effect being somnolence.
Some people can also develop parkinsonism. Why could there be parkinsonism? This is because vesicular monoamine transporter 2 (VMAT2) inhibitors work by reducing the amount of dopamine that can be packaged in the presynaptic neuron. That means that less dopamine is available at the synapse, and this reduces movement. The American Psychiatric Association has issued guidelines for the treatment of tardive dyskinesia, recommending that moderate to severe tardive dyskinesia be treated first-line with VMAT2 inhibitors and that mild tardive dyskinesia also be treated with VMAT2 inhibitors if it is impacting the patient.
Given the impact that tardive dyskinesia has on patients and caregivers, and the physician’s ability to treat these patients’ mental health issues, we need to become aggressive and treat the tardive dyskinesia so that patients can improve and be able to have their movements treated without impacting their underlying mental health issues.
Daniel Kremens, professor, Department of Neurology, Sidney Kimmel Medical College, Thomas Jefferson University, codirector, Parkinson’s Disease and Movement Disorders Division, Jack and Vickie Farber Center for Neuroscience, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, has disclosed relevant financial relationships with Teva Pharmaceuticals, AbbVie, Merz, Allergan, Bial, Cerevel, Amneal, Acadia, Supernus, Adamas, Acorda, Kyowa Kirin, and Neurocrine.
A version of this article first appeared on Medscape.com.
New Models Predict Time From Mild Cognitive Impairment to Dementia
Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.
The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.
The findings were published online in Neurology.
Easy-to-Use Prototype
On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.
Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.
For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.
Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aβ) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.
At follow-up, investigators found that the yearly decline in MMSE scores accelerated in patients with both MCI and mild dementia. In MCI, the average MMSE declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.
In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.
The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI and a baseline MMSE of 28 and CSF Aβ 1-42 of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).
However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the patient would not reach that threshold for 8.6 years.
For a hypothetical patient with mild dementia with a baseline MMSE of 20 and CSF Aβ 1-42 of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is reduced by 30% with drug treatment.
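These time-to-threshold estimates follow from simple rate arithmetic. As a minimal sketch, assuming a constant (linear) rate of decline rather than the authors’ full linear mixed models, the hypothetical Python snippet below shows how a 30% slowing of decline stretches the predicted times; the function name and the back-calculated rates are illustrative, not taken from the paper.

```python
# Minimal sketch: constant-rate MMSE decline. The published models are
# linear mixed models with biomarker covariates; this only illustrates
# the effect of a 30% slower decline on time-to-threshold.

def years_to_threshold(baseline: float, threshold: float,
                       points_per_year: float, slowing: float = 0.0) -> float:
    """Years until the MMSE falls from `baseline` to `threshold`.

    slowing: fractional reduction in the decline rate (0.3 = 30% slower).
    """
    return (baseline - threshold) / (points_per_year * (1.0 - slowing))

# MCI example: MMSE 28 -> 20 in ~6 years implies ~1.33 points/year.
rate_mci = (28 - 20) / 6.0
print(years_to_threshold(28, 20, rate_mci))               # 6.0 years
print(years_to_threshold(28, 20, rate_mci, slowing=0.3))  # ~8.6 years

# Mild dementia example: MMSE 20 -> 15 in ~2.3 years (~2.17 points/year).
rate_dem = (20 - 15) / 2.3
print(years_to_threshold(20, 15, rate_dem))               # 2.3 years
print(years_to_threshold(20, 15, rate_dem, slowing=0.3))  # ~3.3 years
```

In this linear approximation, a 30% slower decline simply multiplies the time to any threshold by 1/0.7, or about 1.43, which matches the published 6 versus 8.6 and 2.3 versus 3.3 year figures.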
External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort of participants who were cognitively unimpaired or had MCI or dementia, showed comparable performance between the model-building approaches.
Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.
Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.
“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.
Compared with other studies predicting MMSE decline using different statistical techniques, these new models showed similar or even better predictive performance while requiring less or similar information, the investigators noted.
The study used MMSE as a measure of cognition, but there may be intraindividual variation in these measures among cognitively normal patients, and those with cognitive decline may score lower if measurements are taken later in the day. Another study limitation was that the models were built for use in memory clinics, so generalizability to the general population could be limited.
The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.
A version of this article first appeared on Medscape.com.
FROM NEUROLOGY
‘Doesn’t Fit Anything I Trained for’: Committee Examines Treatment for Chronic Illness After Lyme Disease
WASHINGTON — Advancing treatment for what has been variably called chronic Lyme disease and posttreatment Lyme disease (PTLD) is under examination by a National Academies of Sciences, Engineering, and Medicine (NASEM) committee of experts for the first time — a year after the NASEM shone a spotlight on the need to accelerate research on chronic illnesses that follow known or suspected infections.
The committee will not make recommendations on specific approaches to diagnosis and treatment when it issues a report in early 2025 but will instead present “consensus findings” on treatment for chronic illness associated with Lyme disease, including recommendations for advancing treatment.
It’s an area devoid of US Food and Drug Administration–approved therapies, devoid of any consensus on the off-label use of medications, and without any current standard of care or proven mechanisms and pathophysiology, said John Aucott, MD, director of the Johns Hopkins Medicine Lyme Disease Clinical Research Center, Baltimore, one of the invited speakers at a public meeting held by the NASEM in Washington, DC.
“The best way to look at this illness is not from the silos of infectious disease or the silos of rheumatology; you have to look across disciplines,” Dr. Aucott, also associate professor of medicine in the Division of Rheumatology, told the committee. “The story doesn’t fit anything I trained for in my infectious disease fellowship. Even today, I’d posit that PTLD is like an island — it’s still not connected to a lot of the mainstream of medicine.”
Rhisa Parera, who wrote and directed a 2021 documentary, Your Labs Are Normal, was one of several invited speakers who amplified the patient voice. Starting around age 7, she had pain in her knees, spine, and hips and vivid nightmares. In high school, she developed gastrointestinal issues, and in college, she developed debilitating neurologic symptoms.
Depression was her eventual diagnosis after having seen “every specialist in the book,” she said. At age 29, she received a positive western blot test and a Lyme disease diagnosis, at which point “I was prescribed 4 weeks of doxycycline and left in the dark,” the 34-year-old Black patient told the committee. Her health improved only after she began working with an “LLMD,” or Lyme-literate medical doctor (a term used in the patient community), while she lived with her mother and did not work, she said.
“I don’t share my Lyme disease history with other doctors. It’s pointless when you have those who will laugh at you, say you’re fine if you were treated, or just deny the disease completely,” Ms. Parera said. “We need this to be taught in medical school. It’s a literal emergency.”
Incidence and Potential Mechanisms
Limited research has suggested that 10%-20% of patients with Lyme disease develop persistent symptoms after standard antibiotic treatment advised by the Infectious Diseases Society of America (IDSA), Dr. Aucott said. (On its web page on chronic symptoms, the Centers for Disease Control and Prevention presents a more conservative range of 5%-10%.)
His own prospective cohort study at Johns Hopkins, published in 2022, found that 13.7% of 234 patients with prior Lyme disease met symptom and functional impact criteria for PTLD, compared with 4.1% of 49 participants without a history of Lyme disease — a statistically significant difference that he said should “put to rest” the question of “is it real?”
PTLD is the research case definition proposed by the IDSA in 2006; it requires that patients have prior documented Lyme disease, no other specific comorbidities, and specific symptoms (fatigue, widespread musculoskeletal pain, and/or cognitive difficulties) causing significant functional impact at least 6 months from their initial diagnosis and treatment.
In the real world, however, where diagnostics for acute Lyme disease are often inaccurate, erythema migrans is often absent, and the symptomatology of Lyme IACI (Lyme infection-associated chronic illness) is variable (and where there is no approved laboratory test or objective biomarker for diagnosing Lyme IACI), PTLD represents only a subset of a broader, heterogeneous population with persistent symptoms.
The term “Lyme IACI,” pronounced “Lyme eye-ACK-ee” at the meeting, builds on conversations at the 2023 NASEM workshop on infection-associated chronic illnesses and “encompasses a variety of terms that are used,” including PTLD, PTLD syndrome, persistent Lyme disease, and chronic Lyme disease, according to committee documents. Symptoms are distinct from the known complications of Lyme disease, such as arthritis or carditis.
The findings from Dr. Aucott’s SLICE cohort likely represent “the best outcome,” he said. They’re “probably not generalizable to a community setting where we see lots of missed diagnoses and delayed diagnoses,” as well as other tick-borne coinfections.
One of the challenges in designing future trials, in fact, relates to enrollment criteria and whether to use strict inclusion and exclusion criteria associated with the IDSA definition or take a broader approach to trial enrollment, he and others said. “You want to enroll patients for whom there’s no controversy that they’ve had Lyme infection ... for a study people believe in,” Dr. Aucott said during a discussion period, noting that it’s typical to screen over 100 patients to find one enrollee. “But it’s a tension we’re having.”
Timothy Sellati, PhD, chief scientific officer of the Global Lyme Alliance, urged change. “It’s really important to try to figure out how to alter our thinking on identifying and diagnosing chronic Lyme patients because they need to be recruited into clinical trials,” he said during his presentation.
“We think the best way to do this is to [develop and] employ composite diagnostic testing” that looks at unique Borrelia signatures (eg, protein, DNA, RNA, or metabolites), genetic and/or epigenetic signatures, inflammation signatures, T-cell-independent antibody signatures, and other elements, Dr. Sellati said.
Researchers designing treatment trials also face unknowns, Dr. Aucott and others said, about the role of potential mechanisms of Lyme IACI, from persistent Borrelia burgdorferi (or Borrelia mayonii) infection or the persistence of bacterial remnants (eg, nucleic acids or peptidoglycans) to infection-triggered pathology such as persistent immune dysregulation, chronic inflammation, autoimmunity, microbiome alterations, and dysautonomia and other neural network alterations.
The NASEM’s spotlight on Lyme IACI follows its long COVID-driven push last year to advance a common research agenda in infection-associated chronic illnesses. Investigators see common symptoms and potential shared mechanisms between long COVID, Lyme IACI, myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), and other complex chronic illnesses following infections.
At the Lyme IACI meeting, invited speakers described parts of the research landscape. Avindra Nath, MD, of the National Institute of Neurological Disorders and Stroke, for instance, described a recently published deep phenotyping study of 17 patients with ME/CFS that found decreased central catecholamine synthesis, circuit dysfunction of integrative brain regions, and immune profiling differences (eg, defects in B-cell maturation or T-cell exhaustion), compared with matched controls, that suggest the persistence of microbial antigens.
And John Leong, MD, PhD, of Tufts University, Boston, described his lab’s focus on understanding the microbe-host interactions that enable bloodstream dissemination and tissue invasion of B burgdorferi to take hold, increasing the risk for persistent symptoms. Other research at Tufts, he noted during a discussion period, has demonstrated that B burgdorferi can persist despite antibiotic exposure in microtiter dishes. “Those organisms that survive are really difficult to eradicate in vitro,” Dr. Leong said.
Other physician investigators described research on nociplastic pain — a category of pain that can be triggered by infections, causing both amplified sensory processing and augmented central nervous system pain — and on whether reactivation of the Epstein-Barr virus could potentiate autoimmunity in the context of Borrelia infection.
Researchers are ready to test therapies while pathophysiology is unraveled — provided there is funding, Dr. Aucott said. The Clinical Trials Network for Lyme and Other Tick-Borne Diseases, coordinated by Brian Fallon, MD, of Columbia University, New York City, and funded several years ago by the Steven & Alexandra Cohen Foundation, has a slate of small pilot studies underway or being planned that address potential mechanisms (eg, studies of pulse intravenous ceftriaxone, tetracycline, transauricular vagus nerve stimulation, and mast cell modulation). And should full multisite trials be designed and funded, the network is ready with an infrastructure.
Need for Patient-Centered Outcomes
Persistent symptomatology is on the NIH’s radar screen. Efforts to understand causes were part of a strategic tick-borne disease research plan developed by the NIH in 2019. And in 2023, the National Institute of Allergy and Infectious Diseases (NIAID) funded seven projects addressing persistent symptoms that will run through 2028, C. Benjamin Beard, PhD, deputy division director of the CDC’s Division of Vector-Borne Diseases, said at the NASEM committee meeting.
Patient advocates maintained that too much emphasis is placed on tick biology and pathophysiology. When Wendy Adams, research grant director and advisory board member of the Bay Area Lyme Foundation, and a colleague analyzed NIAID tick-borne disease funding from 2013 to 2021, they found that 75% of the funding went toward basic research, 15% to translational research, and “only 3% went to clinical research,” Ms. Adams told the committee.
Only 3% of the basic research budget was spent on coinfections, she said, and only 1% was spent on neurologic disease associated with tick-borne infections, both of which are survey-defined patient priorities. Moreover, “12% of the overall NIAID [tick-borne diseases] budget was spent on tick biology,” she said.
Research needs to involve community physicians who are utilizing the guidelines and approaches of the International Lyme and Associated Diseases Society to treat most patients with Lyme IACI, Ms. Adams said. “They have data to be mined,” she said, as does LymeDisease.org, which maintains a patient registry, MyLymeData, with over 18,000 patients. The organization has published two treatment studies, including one on antibiotic treatment response.
Lorraine Johnson, JD, MBA, CEO of LymeDisease.org and principal investigator of MyLymeData, stressed the importance of using patient-centered outcomes that incorporate minimal clinically important differences (MCIDs). “A change in the SF-36 score [without consideration of MCIDs] is not inherently important or meaningful to patients,” she said, referring to the SF-36 survey of health-related quality of life.
“This may seem like an esoteric issue, but two of the four clinical trials done [on retreatment of] persistent Lyme disease used the SF-36 as their outcome measure, and those studies, led by [Mark] Klempner, concluded that retreatment was not effective,” Ms. Johnson said. “Patients have been and continue to be harmed by [this research] because they’re told by physicians that antibiotics don’t work.”
A 2012 biostatistical review of these four RCTs — trials that helped inform the 2006 IDSA treatment guidelines — concluded that the Klempner studies “set the bar for treatment success too high,” Ms. Johnson said. Three of the four trials were likely underpowered to detect clinically meaningful treatment effects, the review also found.
The NASEM committee will hold additional public meetings and review a wide range of literature through this year. The formation of the committee was recommended by the US Department of Health and Human Services Tick-Borne Disease Working Group that was established by Congress in 2016 and concluded its work in 2022. The committee’s work is funded by the Cohen Foundation.
A version of this article appeared on Medscape.com.
High Blood Sugar May Drive Dementia, German Researchers Warn
On World Brain Day (July 22, 2024), the German Society of Neurology (DGN) and the German Brain Foundation pointed out that too much sugar can harm the brain. Current results of the Global Burden of Disease study show that stroke and dementia are among the top 10 causes of death. A healthy, active lifestyle with sufficient exercise and sleep, along with avoidance of harmful substances such as alcohol, nicotine, and excessive sugar, protects the brain.
“Of course, the dose makes the poison, as the brain, being the body’s powerhouse, needs glucose to function,” said Frank Erbguth, MD, PhD, president of the German Brain Foundation, in a press release from DGN and the German Brain Foundation. “However, with a permanent increase in blood sugar levels due to too many, too lavish meals and constant snacking on the side, we overload the system and fuel the development of neurologic diseases, particularly dementia and stroke.”
Per capita sugar consumption in Germany was 33.2 kg in 2021/2022, almost twice the recommended amount. The German Nutrition Society recommends that no more than 10% of energy intake come from sugar. For a reference intake of 2000 kilocalories, that works out to 50 g per day, or about 18 kg per year. This total includes not only added sugar but also naturally occurring sugar, such as that in fruits, honey, and juices.
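As a quick check of those figures (assuming the standard conversion of 4 kcal per gram of sugar):

$$0.10 \times 2000~\text{kcal} = 200~\text{kcal}, \qquad \frac{200~\text{kcal}}{4~\text{kcal/g}} = 50~\text{g/day}, \qquad 50~\text{g/day} \times 365 \approx 18.3~\text{kg/year}$$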
What’s the Mechanism?
In Germany, around 250,000 people are diagnosed with dementia annually, and 15%-25% of those cases are vascular dementia, which represents roughly 40,000-60,000 new cases each year.
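Those case counts follow directly from the stated proportions, with the article rounding the endpoints to 40,000 and 60,000:

$$0.15 \times 250{,}000 = 37{,}500 \quad\text{and}\quad 0.25 \times 250{,}000 = 62{,}500$$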
In addition, glycosaminoglycans, which are complex sugar molecules, can directly impair cognition: They affect the function of synapses between nerve cells and thus neuronal plasticity. Experimental data presented at the 2023 American Chemical Society meeting demonstrated this phenomenon.
Twenty years ago, a study provided evidence that a diet high in fat and sugar disrupts neuronal plasticity and can impair hippocampal function in the long term. A recent meta-analysis confirms these findings: Although mental performance improves in the 2-12 hours after sugar consumption, sustained sugar intake can permanently damage cognitive function.
Diabetes mellitus can indirectly cause brain damage. Since the 1990s, it has been known that patients with type 2 diabetes have a significantly higher risk for dementia. It is suspected that glucose metabolism is also disrupted in neurons, thus contributing to the development of Alzheimer’s disease. Insulin also plays a role in the formation of Alzheimer’s plaques.
The Max Planck Institute for Metabolism Research demonstrated in 2023 that regular consumption of high-sugar and high-fat foods can change the brain. This leads to an increased craving for high-sugar and high-fat foods, which in turn promotes the development of obesity and type 2 diabetes.
Reduce Sugar Consumption
DGN and the German Brain Foundation advise minimizing sugar consumption. This process is often challenging, as even a small dose of sugar can trigger the gut to send signals to the brain via the vagus nerve, thus causing a strong craving for more sugar. “This could be the reason why some people quickly eat a whole chocolate bar after just one piece,” said Dr. Erbguth. In addition, dopamine, a “feel-good hormone,” is released in the brain when consuming sugar, thus leading to a desire for more.
“It is wise to break free from this cycle by largely avoiding sugar,” said Peter Berlit, MD, secretary general and spokesperson for DGN. “The effort is worth it, as 40% of all dementia cases and 90% of all strokes are preventable, with many of them linked to industrial sugar,” said Dr. Berlit. DGN and the German Brain Foundation support the call for a tax on particularly sugary beverages. They also pointed out that foods like yogurt or tomato ketchup contain sugar, and alcohol can also significantly raise blood sugar levels.
This story was translated from the Medscape German edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Two Diets Linked to Improved Cognition, Slowed Brain Aging
An intermittent fasting (IF) diet and a standard healthy living (HL) diet focused on healthy foods both led to weight loss, reduced insulin resistance (IR), and slowed brain aging in older overweight adults with IR, new research showed. However, neither diet had an effect on Alzheimer’s disease (AD) biomarkers.
Although investigators found both diets were beneficial, some outcomes were more robust with the IF diet.
“The study provides a blueprint for assessing brain effects of dietary interventions and motivates further research on intermittent fasting and continuous diets for brain health optimization,” wrote the investigators, led by Dimitrios Kapogiannis, MD, chief, human neuroscience section, National Institute on Aging, and adjunct associate professor of neurology, the Johns Hopkins University School of Medicine.
The findings were published online in Cell Metabolism.
Cognitive Outcomes
The prevalence of IR — reduced cellular sensitivity to insulin that is a hallmark of type 2 diabetes — increases with age and obesity, adding to the risk for accelerated brain aging as well as AD and related dementias (ADRD) in older adults with overweight.
Studies have reported that healthy diets promote overall health, but it is unclear whether, and to what extent, they improve brain health beyond general health enhancement.
Researchers used multiple brain and cognitive measures to assess dietary effects on brain health, including peripherally harvested neuron-derived extracellular vesicles (NDEVs) to probe neuronal insulin signaling; MRI to investigate the pace of brain aging; magnetic resonance spectroscopy (MRS) to measure brain glucose, metabolites, and neurotransmitters; and NDEVs and cerebrospinal fluid to derive biomarkers for AD/ADRD.
The study included 40 cognitively intact overweight participants with IR, mean age 63.2 years, 60% women, and 62.5% White. Their mean body weight was 97.1 kg and mean body mass index (BMI) was 34.4.
Participants were randomly assigned to 8 weeks of an IF diet or an HL diet that emphasized fruits, vegetables, whole grains, lean proteins, and low-fat dairy and limited added sugars, saturated fats, and sodium.
The IF diet involved following the HL diet for 5 days per week and restricting calories to a quarter of the recommended daily intake for 2 consecutive days.
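In concrete terms (assuming, purely for illustration, a recommended intake of 2000 kcal/day, a figure the study does not specify), fasting-day intake works out to

$$0.25 \times 2000~\text{kcal} = 500~\text{kcal on each of the 2 consecutive fasting days}$$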
Both diets reduced neuronal IR and had comparable effects in improving insulin signaling biomarkers in NDEVs, reducing brain glucose on MRS, and improving blood biomarkers of carbohydrate and lipid metabolism.
Using MRI, researchers also assessed brain age, an indication of whether the brain appears older or younger than an individual’s chronological age. There was a decrease of 2.63 years with the IF diet (P = .05) and 2.42 years with the HL diet (P < .001) in the anterior cingulate and ventromedial prefrontal cortex.
Both diets improved executive function and memory, with those following the IF diet benefiting more in strategic planning, switching between two cognitively demanding tasks, cued recall, and other areas.
Hypothesis-Generating Research
AD biomarkers, including amyloid beta 42 (Aβ42), Aβ40, and plasma phosphorylated tau 181, did not change with either diet, a finding that investigators speculated may be due to the short duration of the study. Neurofilament light chain levels increased in both groups, with no differences between the diets.
In other findings, BMI decreased by 1.41 with the IF diet and by 0.80 with the HL diet, and a similar pattern was observed for weight. Waist circumference decreased in both groups with no significant differences between diets.
An exploratory analysis showed executive function improved with the IF diet but not with the HL diet in women, whereas it improved with both diets in men. BMI and apolipoprotein E and SLC16A7 genotypes also modulated diet effects.
Both diets were well tolerated. The most frequent adverse events were gastrointestinal and occurred only with the IF diet.
The authors noted the findings are preliminary and hypothesis generating. Limitations included the study’s short duration and its power to detect only moderate to large effects and between-diet differences. Researchers also did not collect data on dietary intake, so lapses in adherence cannot be excluded. However, the large decreases in BMI, weight, and waist circumference with both diets indicated high adherence.
The study was supported by the National Institutes of Health’s National Institute on Aging. The authors reported no competing interests.
A version of this article first appeared on Medscape.com.
FROM CELL METABOLISM
Risk of MACE Comparable Among Biologic Classes for Psoriasis, PsA
TOPLINE:
The risk for major adverse cardiovascular events (MACE) is comparable across the biologic classes used to treat psoriasis and psoriatic arthritis (PsA), a database analysis finds.
METHODOLOGY:
- Data from the TriNetX health records database included 32,758 patients treated with TNF inhibitors (TNFi, 62.9%), interleukin-17 inhibitors (IL-17i, 15.4%), IL-23i (10.7%), and IL-12i/IL-23i (10.7%).
- The researchers calculated time-dependent risk for MACE using multinomial Cox proportional hazards models, with TNFi exposure as the reference; a minimal sketch of this kind of model appears after this list.
- Subset analyses compared MACE in patients with and without existing cardiovascular disease.
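By way of illustration only — this is not the study’s code — such a model could be fit in Python with the lifelines library. The input file and the column names time_to_mace, mace_event, and biologic_class are hypothetical.

```python
# Minimal sketch (hypothetical data, not the study's code): Cox proportional
# hazards model for MACE risk by biologic class, with TNFi as the reference.
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: time_to_mace (days of follow-up), mace_event (1 = MACE
# occurred, 0 = censored), biologic_class (one of the four classes below).
df = pd.read_csv("psoriasis_biologics_cohort.csv")

# Order the classes so TNFi is first, then dummy-code and drop it, making
# TNFi the reference category for the resulting hazard ratios.
classes = ["TNFi", "IL-17i", "IL-23i", "IL-12i/IL-23i"]
df["biologic_class"] = pd.Categorical(df["biologic_class"], categories=classes)
dummies = pd.get_dummies(df["biologic_class"], drop_first=True)  # drops TNFi

model_df = pd.concat([df[["time_to_mace", "mace_event"]], dummies], axis=1)
cph = CoxPHFitter()
cph.fit(model_df, duration_col="time_to_mace", event_col="mace_event")
cph.print_summary()  # hazard ratio for each class vs TNFi
```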
TAKEAWAY:
- Compared with TNFi use, there was no difference in the incidence of MACE in the IL-17i, IL-23i, or IL-12i/IL-23i group.
- There were also no significant differences between biologic groups in the incidence of congestive heart failure, myocardial infarction, or cerebrovascular accident/stroke.
IN PRACTICE:
Despite some concern about increased risk for MACE with TNFi use, this study suggests no special risk for patients with psoriasis or PsA associated with TNFi vs other biologics. “Given our results, as it pertains to MACE, prescribers shouldn’t favor any one biologic class over another,” said lead investigator Shikha Singla, MD, medical director of the Psoriatic Arthritis Program at Medical College of Wisconsin in Milwaukee, Wisconsin.
SOURCE:
Bonit Gill, MD, a second-year fellow at Medical College of Wisconsin, presented the study as a poster at the annual meeting of the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis.
LIMITATIONS:
The study’s retrospective nature makes it impossible to prove causation, and the patients included in the study were from Wisconsin, which may limit generalizability.
DISCLOSURES:
Dr. Gill had no relevant financial disclosures. Other study authors participated in trials or consulted for AbbVie, AstraZeneca, Novartis, Eli Lilly, Janssen, and UCB.
A version of this article first appeared on Medscape.com.
Study Links Newer Shingles Vaccine to Delayed Dementia Diagnosis
The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine.
“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England.
But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk.
“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said.
The study was published online on July 25 in Nature Medicine.
‘Natural Experiment’
Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection.
The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017.
Researchers used electronic health records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with that in individuals who received the recombinant version after the United States made the switch.
They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020.
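For readers unfamiliar with the technique, here is a minimal, hedged sketch of 1:1 propensity score matching in Python. The covariates, column names, and matching-with-replacement shortcut are illustrative assumptions, not details from the study.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline): match each
# recombinant-vaccine recipient to the live-vaccine recipient with the
# closest propensity score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("shingles_vaccine_cohort.csv")   # assumed file
covariates = ["age", "sex", "n_comorbidities"]    # assumed, numerically coded

# 1) Propensity score: each person's probability of receiving the
#    recombinant vaccine, estimated from the measured confounders.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["recombinant"])
df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

# 2) Nearest-neighbor matching on the score (with replacement, for brevity;
#    real analyses typically match without replacement and apply a caliper).
treated = df[df["recombinant"] == 1]
control = df[df["recombinant"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
# 'matched' now holds balanced cohorts for the dementia-incidence comparison.
```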
Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected.
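A back-of-envelope reading of those two figures (an inference from the reported numbers, not a calculation stated in the paper): if 164 extra days represents a 17% relative gain, the implied mean diagnosis-free time among live-vaccine recipients who were eventually diagnosed within the 6-year window is

$$\frac{164~\text{days}}{0.17} \approx 965~\text{days} \approx 2.6~\text{years}$$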
As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%.
Reduced Risk or Delayed Diagnosis?
Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases was similar in the two shingles vaccine groups by the end of the 6-year follow-up period, but there was a difference in the time at which patients received a diagnosis of dementia.
“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained.
But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported.
“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested.
But the researchers cautioned that this study could not prove causality.
“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned.
The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom.
Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older.
In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.
Mechanism Uncertain
Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.
“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted.
The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role.
“We don’t have any data on the mechanism, and this study did not address that, so further studies are needed to look into this,” Dr. Harrison said.
Stronger Effect in Women
Another intriguing finding is that the association between the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women than in men.
In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women.
In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, showing a 22% increased time without dementia in women versus a 13% increased time in men with the recombinant versus the live vaccine.
As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect.
“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented.
Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study.
He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine.
“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented.
Outside Experts Positive
Outside experts, providing comment to the Science Media Centre, welcomed the new research.
“The study is very well conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association.
The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.
“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”
Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.
In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality.
“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.
Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.
A version of this article first appeared on Medscape.com.
The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine.
“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England.
But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk.
“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said.
The study was published online on July 25 in Nature Medicine.
‘Natural Experiment’
Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection.
The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017.
Researchers used electronic heath records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with those who received the recombinant version after the United States made the switch.
They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020.
Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected.
As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%.
Reduced Risk or Delayed Diagnosis?
Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases were similar in the two shingles vaccine groups by the end of the 6-year follow-up period but there was a difference in the time at which they received a diagnosis of dementia.
“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained.
But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported.
“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested.
But the researchers cautioned that this study could not prove causality.
“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned.
The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom.
Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older.
In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.
Mechanism Uncertain
Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.
“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted.
The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role.
“We don’t have any data on the mechanism, and thus study did not address that, so further studies are needed to look into this,” Dr. Harrison said.
Stronger Effect in Women
Another intriguing finding is that the association with the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women vs men.
In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women.
In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, showing a 22% increased time without dementia in women versus a 13% increased time in men with the recombinant versus the live vaccine.
As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect.
“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented.
Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study.
He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine.
“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented.
Outside Experts Positive
Outside experts, providing comment to the Science Media Centre, welcomed the new research.
“ The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association.
The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.
“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”
Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.
In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality.
“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.
Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.
A version of this article first appeared on Medscape.com.
The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine.
“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England.
But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk.
“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said.
The study was published online on July 25 in Nature Medicine.
‘Natural Experiment’
Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection.
The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017.
Researchers used electronic heath records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with those who received the recombinant version after the United States made the switch.
They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020.
Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected.
As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%.
Reduced Risk or Delayed Diagnosis?
Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases were similar in the two shingles vaccine groups by the end of the 6-year follow-up period but there was a difference in the time at which they received a diagnosis of dementia.
“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained.
But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported.
“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested.
But the researchers cautioned that this study could not prove causality.
“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned.
The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom.
Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older.
In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.
Mechanism Uncertain
Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.
“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted.
The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role.
“We don’t have any data on the mechanism, and thus study did not address that, so further studies are needed to look into this,” Dr. Harrison said.
Stronger Effect in Women
Another intriguing finding is that the association with the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women vs men.
In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women.
In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, showing a 22% increased time without dementia in women versus a 13% increased time in men with the recombinant versus the live vaccine.
As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect.
“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented.
Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study.
He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine.
“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented.
Outside Experts Positive
Outside experts, providing comment to the Science Media Centre, welcomed the new research.
“The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association.
The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.
“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”
Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.
In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality.
“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.
Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.
A version of this article first appeared on Medscape.com.
FROM NATURE MEDICINE
The Rise of the Scribes
“We really aren’t taking care of records — we’re taking care of people.” — Dr. Lawrence Weed
What is the purpose of a progress note? Anyone? Yes, you there. “Insurance billing?” Yes, that’s a good one. Anyone else? “To remember what you did?” Excellent. Another? Yes, that’s right, for others to follow along in your care. These are all good reasons for a progress note to exist. But they aren’t the whole story. Let’s start at the beginning.
Charts were once a collection of paper sheets with handwritten notes. Sometimes illegible, sometimes beautiful, always efficient. A progress note back then could be just 10 characters: AK, LN2, X,X,X,X,X (with X’s marking nitrogen sprays). Then came the healthcare K-Pg event: the conversion to EMRs. Those doctors who survived evolved into computer programmers, creating blocks of text from a few keystrokes. But like toddler-sized Legos, the blocks made it impossible to build a note that was nuanced or precise. Worse yet, many notes consisted of blocks copied awkwardly from one note into another, creating grotesque structures unrecognizable as anything that should exist in nature. Words and numbers, but no information.
Thanks to the eternity of EMR, these creations live on, hideous and useless. They waste not only the server’s energy but also our time. Few things are more maddening than scrolling to reach the bottom of another physician’s note only to find there is nothing there.
Whose fault is this? Anyone? Yes, that’s right, insurers. As there are probably no payers in this audience, let’s blame them. I agree, the crushing burden of documentation-to-get-reimbursed has forced us to create “notes” that add no value for us but rack up the points that get us paid. CMS, payers, prior authorizations, and now even patients: it seems we are documenting for everyone except ourselves. There isn’t time to satisfy them all, and this burden, repeated at every encounter, is a proximate cause of doctors’ despair. Until now.
Enter the AI scribe, which promises a fully formed, comprehensive, sometimes pretty note that satisfies all audiences. Dr. Larry Weed must be dancing in heaven. It was Dr. Weed who led us from the nicotine-stained logs of the 1950s to the powerful problem-based notes we use today, an innovation that rivals the stethoscope in its impact.
Professor Weed also predicted that computers would be important for capturing and making sense of patient data, helping us make accurate diagnoses and efficient plans. Again, he was right. He would surely be advocating that we take advantage of AI scribes’ marvelous ability to capture salient data and present it in the form of a problem-oriented medical record.
AI scribes will be ubiquitous soon; I’m fast, and even for me they save time. They also allow us, for the first time in a decade, to turn from the glow of a screen and actually face the patient; we no longer have to scribe and care simultaneously. Hallelujah. And yet, lest I disappoint you without a twist, it seems that with AI scribes, as with EMRs, we lose a little something too.
Like self-driving cars or ChatGPT-generated letters, they remove cognitive load. They are lovely when you have to multitask or are trying to recall a visit from hours (or days) ago. Using them, you’ll feel faster, lighter, freer, happier. But what’s missing is the thinking. At the end, you have an exquisite note, but you didn’t write it. It has the salient points, but none of the mental work that created it. AI scribes subvert the valuable work of synthesis. That was the critical part of Dr. Weed’s discovery: writing problem-oriented notes helped us think better.
Writing allows for the friction that helps us process what is going on with a patient. It allows for the discovery of diagnoses and prompts plans. When I was an intern, one of my attendings would handwrite notes, succinctly showing what he had observed and was thinking. He’d sketch diagrams in the chart, for example, to help illustrate how we’d work through the toxic, metabolic, and infectious etiologies of acute liver failure. Sublime.
The act of writing also helps remind us there is a person attached to these words. Like a handwritten sympathy card, it is intimate, human. Even using our EMR, I’d still often type sentences that help tell the patient’s story. “Her sister just died. Utterly devastated. I’ll forward chart to Bob (her PCP) to check in on her.” Or: “Scratch golfer wants to know why he is getting so many SCCs now. ‘Like bankruptcy, gradually then suddenly,’ I explained. I think I broke through.”
Since we’ve concluded that the purpose of a note is mostly to capture data, AI scribes are a godsend. They do so with remarkable quality and efficiency. We’ll just have to remember that if the diagnosis is unclear, it might help to write the note out yourself. And even when the note is done by the AI machine, we might add human touches now and again, lest there be no art left in what we do.
“For sale. Sun hat. Never worn.”
Dr. Benabio is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. The opinions expressed in this column are his own and do not represent those of Kaiser Permanente. Dr. Benabio is @Dermdoc on X. Write to him at dermnews@mdedge.com.