
Updated Alzheimer’s Guidelines Chart the Full Diagnostic Journey

Article Type
Changed
Wed, 01/15/2025 - 15:05

New evidence-based clinical practice guidelines from the Alzheimer’s Association provide updated recommendations on evaluating individuals suspected of having Alzheimer’s disease and Alzheimer’s disease–related neurodegenerative disorders in both primary and specialty care settings.

This is the first update since 2001 for specialists and the first guideline for primary care physicians. Executive summaries of the guidelines were published in three articles online on December 23 in a special issue of Alzheimer’s & Dementia.

 

What’s New? 

“With this guideline, we expand the scope of prior guidelines by providing recommendations for practicing clinicians on the process from start to finish,” coauthor Brad Dickerson, MD, director of the Massachusetts General Hospital Frontotemporal Disorders Unit and professor of neurology at Harvard Medical School, Boston, said in a statement.

“If clinicians adopt these recommendations and healthcare systems provide adequate resources, outcomes should improve in most patients in most practice settings,” Dickerson added in an interview.

Through a modified-Delphi approach and guideline-development process, an expert workgroup representing primary and specialty care reviewed 7374 publications, of which 133 met inclusion criteria.

Based on the information, the workgroup outlined a three-step patient-centered evaluation process, which includes assessing cognitive functional status, identifying the cognitive-behavioral syndrome based on specific symptoms, and determining the likely brain diseases or conditions causing the symptoms.

 

What Are the Key Recommendations?

The guidelines include 19 “practical” recommendations that are applicable to any practice setting. They capture the core elements of a high-quality evaluation and disclosure process, the authors said. Here is a brief summary of the recommendations:

Initial evaluation: Perform a multitiered evaluation for patients who self-report or whose care partner or clinician reports cognitive, behavioral, or functional changes.

Patient-centered communication: Partner with the patient and/or care partner to establish shared goals for the evaluation process; assess the patient’s capacity to engage in goal setting.

Diagnostic formulation: Use a tiered approach to assessments and tests based on individual presentation, risk factors, and profile, aiming to determine the level of impairment, cognitive-behavioral syndrome, and likely causes and contributing factors.

History taking: Gather reliable information from informants about changes in cognition, activities of daily living, mood, neuropsychiatric symptoms, and sensory/motor functions. Document individualized risk factors for cognitive decline.

Examination: Conduct a comprehensive examination of cognition, mood, behavior, and a dementia-focused neurologic evaluation using validated tools.

Laboratory tests: Perform tiered, individualized laboratory evaluations, starting with routine tests for all patients.

Structural imaging: Obtain structural brain imaging (MRI preferred, CT as an alternative) to help establish a cause.

Ongoing communication: Engage in ongoing dialogue with patient/care partner to guide them throughout the diagnostic process.

Diagnostic disclosure: Share findings honestly and compassionately, explaining the syndrome, its severity, probable cause, prognosis, treatment options, and support resources.

Specialist referral: Refer patients with atypical, uncertain, early-onset, or rapidly progressing symptoms to a dementia subspecialist.

Neuropsychological testing: Use in instances of diagnostic uncertainty or in patients with complex clinical profiles. At a minimum, the neuropsychological evaluation should include normed testing of the domains of learning and memory (in particular, delayed free and cued recall/recognition), attention, executive function, visuospatial function, and language.

Advanced diagnostic testing: When diagnostic uncertainty remains, obtain additional laboratory tests tailored to individual patient profiles.

Molecular imaging: In a patient with an established cognitive-behavioral syndrome in whom there is continued diagnostic uncertainty regarding cause(s) after structural imaging, a dementia specialist can obtain molecular imaging with fluorodeoxyglucose PET to improve diagnostic accuracy.

Cerebrospinal fluid (CSF) analysis: Utilize CSF biomarkers to evaluate amyloid beta and tau profiles in cases with unresolved diagnostic uncertainty.

Amyloid PET imaging: Perform amyloid PET scans for patients with persistent diagnostic uncertainty after other assessments.

Genetic counseling and testing: Consider genetic testing for patients with strong autosomal dominant family histories and involve a genetic counselor.

 

Future Directions?

Maria C. Carrillo, PhD, chief science officer and medical affairs lead for the Alzheimer’s Association, encourages clinicians to incorporate these guidelines into their practice.

“These guidelines are important because they guide clinicians in the evaluation of memory complaints, which could have many underlying causes. That is the necessary start for an early and accurate Alzheimer’s diagnosis,” Carrillo said in a statement.

Dickerson said the new guidelines do not address blood-based biomarkers “because nobody really feels that they are ready for prime time yet, even though they’re getting rolled out as clinical products.” 

However, the recommendations will be revised as needed. “That’s one of the values of setting this up as a process; whenever any new development occurs, it will be easy to update the guidelines to show where that new test or new biomarker fits in the overall process,” he said.

 

New Appropriate Use Guidance

A separate workgroup, jointly convened by the Alzheimer’s Association and the Society of Nuclear Medicine and Molecular Imaging, has revised appropriate use criteria (AUC) for amyloid PET imaging and developed AUC for tau PET imaging.

They were simultaneously published online in Alzheimer’s & Dementia and The Journal of Nuclear Medicine. They are the first revision since the initial AUC for amyloid PET was introduced in 2013.

“The updated amyloid/tau appropriate use criteria will help ensure these tracers are used in a cost-effective manner and the scan results will be used appropriately to add value to the diagnosis and management of dementia,” said workgroup members Kevin Donohoe, MD, with Beth Israel Deaconess Medical Center, Boston, and Phillip Kuo, MD, with City of Hope National Medical Center, Duarte, California.

The AUC include 17 real-world scenarios in which amyloid or tau PET may be considered, with the two tests considered separately and given their own rating for each scenario.

Overall, the strongest evidence for their use includes assessment and prognosis for people with mild cognitive impairment; assessment of people with dementia when the cause is not clearly known; and determining eligibility for treatment with new disease-modifying therapies, and monitoring response to these treatments, the workgroup said.

“Whereas the prior AUC was written at a time when only the deposition of amyloid could be documented, the new therapeutic agents allow us to demonstrate the actual clearance of amyloid during therapy,” Donohoe and Kuo explained.

“These new therapeutic agents are expensive and, as with most medications, may cause unwanted side effects. The most recent version of the AUC includes information about the appropriate use of amyloid imaging for both documenting the presence of amyloid deposits in the brain, making anti-amyloid therapy an option, as well as documenting the effectiveness of the therapeutic agents as amyloid is (or is not) cleared from the brain,” Donohoe and Kuo noted.

The revised AUC also state that, in most cases, amyloid and tau PET tests should not be used in people who do not have cognitive impairment, even if they carry the APOE4 risk gene for Alzheimer’s disease; for nonmedical purposes such as legal concerns, insurance coverage, or employment screening; or in place of genetic testing in patients suspected of carrying a disease-causing genetic mutation.

In a statement, lead author Gil D. Rabinovici, MD, with University of California, San Francisco, emphasized that the AUC “should be considered guidelines for clinicians, not a substitute for careful clinical judgment that considers the full clinical context for each patient with cognitive complaints.”

This research was funded by the Alzheimer’s Association. Disclosures for guideline authors are available with the original articles.

A version of this article first appeared on Medscape.com.


FROM ALZHEIMER’S & DEMENTIA


How Long Does It Take to See a Neurologist?

Article Type
Changed
Wed, 01/15/2025 - 13:05

The average wait time to see a neurologist following an initial referral was just over a month for older adults, with nearly 1 in 5 patients waiting more than 3 months, a cross-sectional analysis of Medicare data showed.

Wait times were not affected by the number of available neurologists. However, those with multiple sclerosis (MS), epilepsy, Parkinson’s disease, dementia, and sleep disorders had the longest wait times.

“In general, early referral to specialists has been shown to improve outcomes and increase patient satisfaction,” study author Chun Chieh Lin, PhD, MBA, of Ohio State University, Columbus, said in a press release. “Our findings underscore the need to develop new strategies to help people with neurological conditions see neurologists faster.”

The findings were published online in Neurology.

 

No National Benchmark for Wait Times

For this study, researchers analyzed a large sample of fee-for-service Medicare data from 2018 to 2019, identifying patients with a year or less between their last referring physician visit and a new neurologist visit.

Patients were excluded if they were enrolled in health maintenance organization plans, lacked continuous enrollment in Medicare Part A and Part B for the 2 years before the index neurologist visit, had missing data, had no physician referral, or were referred by another neurologist.

In addition to assessing wait times, investigators examined the availability of neurologists who provided medical services to Medicare beneficiaries in the 2018 dataset across 306 hospital referral regions in the United States, based on zip codes.

Results showed that 163,313 patients (average age, 74 years; 58% women; 85% White) were referred by 84,975 physicians to 10,250 neurologists across the United States.

Overall, the average wait time from physician referral to index neurologist visit was 34 days (range, 1-365 days), with longer wait times for White patients, women, and those aged 65-69 years. In all, 18% waited longer than 90 days for an appointment.

The most common conditions diagnosed at the index neurologist visit were chronic pain/abnormality of gait (13%), sleep disorders (11%), and peripheral neuropathy (10%).

Using a linear mixed-effects statistical model, investigators found that patients with back pain waited an average of 30 days to see a neurologist, with longer waits for other conditions. Those with MS had an average wait that was 29 days longer, patients with epilepsy waited an average of 10 days longer, and those with Parkinson’s disease waited 9 days longer (P < .0001).

The number of available neurologists (range, 10-50 neurologists per 100,000 Medicare patients) did not affect wait times. However, there were differences in wait times across states because of different policies or regulations regarding healthcare access, with wait times ranging from a median high of 49 days in Idaho to a low of 24 days in Wyoming.

Notably, when patients saw a neurologist outside of their physician’s referral area, wait times were longer by an average of 11 days.

More than 40% of patients with new neurology referrals had prior office-based visits for the same neurologic diagnosis. For these patients, the median time between diagnosis and index neurologist visit was 342 days (range, 66-753 days).

Female patients in this category waited slightly longer (median, 353 days) than male patients (median, 328 days), and Black and Hispanic patients had longer median waits than White patients (389.5 and 397 days, respectively, vs 337 days; P = .0003).

“It is important to note that there is no national benchmark for determining appropriate wait times for specialist care, making it difficult to standardize expectations for timely access to specialists,” the authors noted.

The investigators suggested that a direct communication channel between primary care physicians and neurologists, such as an eConsult service, may hasten access to neurology consultation without the need for a formal appointment. Telemedicine in rural areas could also shorten wait times, they added.

Study limitations included the inability to determine from the claims data whether patients followed through with their index neurology visits or whether the last visit with the referring physician was the time of referral.

This study was funded by the American Academy of Neurology. Lin reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

The average wait time to see a neurologist following an initial referral was just over a month for older adults, with nearly 1 in 5 patients waiting more than 3 months, a cross-sectional analysis of Medicare data showed.

Wait times were not affected by the number of available neurologists. However, those with multiple sclerosis (MS), epilepsy, Parkinson’s disease, dementia, and sleep disorders had the longest wait times.

“In general, early referral to specialists has been shown to improve outcomes and increase patient satisfaction,” study author Chun Chieh Lin, PhD, MBA, of Ohio State University, Columbus, said in a press release. “Our findings underscore the need to develop new strategies to help people with neurological conditions see neurologists faster.”

The findings were published online in Neurology.

 

No National Benchmark for Wait Times

For this study, researchers analyzed a large sample of fee-for-service Medicare data from 2018 to 2019, identifying patients with a year or less between their last referring physician visit and a new neurologist visit.

Patients were excluded if they were enrolled in a health maintenance organization plan, lacked continuous enrollment in Medicare Part A and Part B for the 2 years before the index neurologist visit, had missing data, had no physician referral, or were referred by another neurologist.

In addition to assessing wait times, investigators examined the availability of neurologists who provided medical services to Medicare beneficiaries in the 2018 dataset across 306 hospital referral regions in the United States, based on zip codes.

Results showed that 163,313 patients (average age, 74 years; 58% women; 85% White) were referred by 84,975 physicians to 10,250 neurologists across the United States.

Overall, the average wait time from physician referral to index neurologist visit was 34 days (range, 1-365 days), with longer wait times for White patients, women, and those aged 65-69 years. In all, 18% of patients waited longer than 90 days for an appointment.

The most common conditions diagnosed at the index neurologist visit were chronic pain/abnormality of gait (13%), sleep disorders (11%), and peripheral neuropathy (10%).

Using a linear mixed-effects statistical model, investigators found that patients with back pain waited an average of 30 days to see a neurologist, with longer waits for other conditions. Those with MS had an average wait that was 29 days longer, patients with epilepsy waited an average of 10 days longer, and those with Parkinson’s disease waited 9 days longer (P < .0001).

The number of available neurologists (range, 10-50 neurologists per 100,000 Medicare patients) did not affect wait times. However, there were differences in wait times across states because of different policies or regulations regarding healthcare access, with wait times ranging from a median high of 49 days in Idaho to a low of 24 days in Wyoming.

Notably, when patients saw a neurologist outside of their physician’s referral area, wait times were longer by an average of 11 days.

More than 40% of patients with new neurology referrals had prior office-based visits for the same neurologic diagnosis. For these patients, the median time between diagnosis and index neurologist visit was 342 days (range, 66-753 days).

Female patients in this category waited slightly longer (median, 353 days) than male patients (median, 328 days), and Black and Hispanic patients had longer median waits than White patients (389.5 and 397 days, respectively, vs 337 days; P = .0003).

“It is important to note that there is no national benchmark for determining appropriate wait times for specialist care, making it difficult to standardize expectations for timely access to specialists,” the authors noted.

The investigators suggested that a direct communication channel between primary care physicians and neurologists, such as an eConsult service, may hasten access to neurology consultation without the need for a formal appointment. Telemedicine in rural areas could also shorten wait times, they added.

Study limitations included the inability to determine from the claims data whether patients followed through with their index neurology visits or whether the last visit with the referring physician coincided with the time of referral.

This study was funded by the American Academy of Neurology. Lin reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Article Source

FROM NEUROLOGY


MRI-Invisible Prostate Lesions: Are They Dangerous?

Article Type
Changed
Thu, 01/09/2025 - 12:24

MRI-invisible prostate lesions. It sounds like the stuff of science fiction and fantasy, a creation from the minds of H.G. Wells, who wrote The Invisible Man, or J.K. Rowling, who authored the Harry Potter series.

But MRI-invisible prostate lesions are real. And what these lesions may, or may not, indicate is the subject of intense debate.

MRI plays an increasingly important role in detecting, diagnosing, and staging prostate cancer, as well as in monitoring disease progression. However, on occasion, a puzzling phenomenon arises. Certain prostate lesions that appear when pathologists examine biopsied tissue samples under a microscope are not visible on MRI. The prostate tissue will, instead, appear normal to a radiologist’s eye.

Why are certain lesions invisible with MRI? And is it dangerous for patients if these lesions are not detected? 

Some experts believe these MRI-invisible lesions are nothing to worry about.

If the clinician can’t see the cancer on MRI, then it simply isn’t a threat, according to Mark Emberton, MD, a pioneer in prostate MRIs and director of interventional oncology at University College London, England.

Laurence Klotz, MD, of the University of Toronto, Ontario, Canada, agreed, noting that “invisible cancers are clinically insignificant and don’t require systematic biopsies.”

Emberton and Klotz compared MRI-invisible lesions to grade group 1 prostate cancer (Gleason score ≤ 6) — the least aggressive category, indicating cancer that is unlikely to spread or kill. For patients on active surveillance, those with MRI-invisible cancers do drastically better than those with visible cancers, Klotz explained.

But other experts in the field are skeptical that MRI-invisible lesions are truly innocuous.

Although statistically an MRI-visible prostate lesion indicates a more aggressive tumor, that is not always the case for every individual, said Brian Helfand, MD, PhD, chief of urology at NorthShore University Health System, Evanston, Illinois.

MRIs can lead to false negatives in about 10%-20% of patients who have clinically significant prostate cancer, though estimates vary.

In one analysis, 16% of men with no suspicious lesions on MRI had clinically significant prostate cancer identified after undergoing a systematic biopsy. Another analysis found that about 35% of MRI-invisible prostate cancers identified via biopsy were clinically significant.

Other studies, however, have indicated that negative MRI results accurately indicate patients at low risk of developing clinically significant cancers. A recent JAMA Oncology analysis, for instance, found that only seven of 233 men (3%) with negative MRI results at baseline who completed 3 years of monitoring were diagnosed with clinically significant prostate cancer.

When a patient has an MRI-invisible prostate tumor, there are a couple of reasons the MRI may not be picking it up, said urologic oncologist Alexander Putnam Cole, MD, assistant professor of surgery, Harvard Medical School, Boston, Massachusetts. “One is that the cancer is aggressive but just very small,” he said.

“Another possibility is that the cancer looks very similar to background prostate tissue, which is something that you might expect if you think about more of a low-grade cancer,” he explained.

The experience level of the radiologist interpreting the MRI can also play into the accuracy of the reading.

But Cole agreed that “in general, MRI visibility is associated with molecular and histologic features of progression and aggressiveness and non-visible cancers are less likely to have aggressive features.”

The genomic profiles of MRI-visible and -invisible cancers bear this out.

According to Todd Morgan, MD, chief of urologic oncology at Michigan Medicine, University of Michigan, Ann Arbor, the gene expression in visible disease tends to be linked to more aggressive prostate tumors whereas gene expression in invisible disease does not.

In one analysis, for instance, researchers found that four genes — PHYHD1, CENPF, ALDH2, and GDF15 — associated with worse progression-free survival and metastasis-free survival in prostate cancer also predicted MRI visibility.

“Genes that are associated with visibility are essentially the same genes that are associated with aggressive cancers,” Klotz said.

 

Next Steps After Negative MRI Result

What do MRI-invisible lesions mean for patient care? If, for instance, a patient has elevated PSA levels but a normal MRI, is a targeted or systematic biopsy warranted?

The overarching message, according to Klotz, is that “you don’t need to find them.” Klotz noted, however, that patients with a negative MRI result should still be followed with periodic repeat imaging.

Several trials support this approach of using MRI to decide who needs a biopsy and delaying a biopsy in men with normal MRIs.

The recent JAMA Oncology analysis found that, among men with negative MRI results, 86% avoided a biopsy over 3 years, with clinically significant prostate cancer detected in only 4% of men across the study period — four in the initial diagnostic phase and seven in the 3-year monitoring phase. However, during the initial diagnostic phase, more than half the men with positive MRI findings had clinically significant prostate cancer detected.

Another recent study found that patients with negative MRI results were much less likely to upgrade to higher Gleason scores over time. Among 522 patients who underwent a systematic and targeted biopsy within 18 months of their grade group 1 designation, 9.2% with negative MRI findings had tumors reclassified as grade group 2 or higher vs 27% with positive MRI findings, and 2.3% with negative MRI findings had tumors reclassified as grade group 3 or higher vs 7.8% with positive MRI findings.

These data suggest that men with grade group 1 cancer and negative MRI result “may be able to avoid confirmatory biopsies until a routine surveillance biopsy in 2-3 years,” according to study author Christian Pavlovich, MD, professor of urologic oncology at the Johns Hopkins University School of Medicine, Baltimore.

Cole said he uses MRI findings to triage who gets a biopsy. When a biopsy is warranted, “I usually recommend adding in some systematic sampling of the other side to assess for nonvisible cancers,” he noted.

Sampling prostate tissue outside the target area “adds maybe 1-2 minutes to the procedure and doesn’t drastically increase the morbidity or risks,” Cole said. It also can help “confirm there is cancer in the MRI target and also confirm there is no cancer in the nonvisible areas.” 

According to Klotz, if imaging demonstrates progression, patients should receive a biopsy — in most cases, a targeted biopsy only. And, Klotz noted, skipping routine prostate biopsies in men with negative MRI results can save thousands of men from these procedures, which carry risks for infections and sepsis.

Looking beyond Gleason scores for risk prediction, MRI “visibility is a very powerful risk stratifier,” he said.

A version of this article appeared on Medscape.com.



Traumatic Brain Injury May Reactivate Herpes Virus Leading to Neurodegeneration

Article Type
Changed
Thu, 01/09/2025 - 12:16

Mild traumatic brain injury (TBI) may reactivate latent herpes simplex virus type 1 (HSV-1) in the brain and contribute to neurodegeneration and development of Alzheimer’s disease pathology, a new study suggested.

Using a three-dimensional (3D) human brain tissue model, researchers observed that quiescent HSV-1 can be reactivated by a mechanical jolt mimicking concussion, leading to signature markers of Alzheimer’s disease, including neuroinflammation, production of amyloid beta and phosphorylated tau (p-tau), and gliosis — a phenotype made worse by repeated head injury.

“This opens the question as to whether antiviral drugs or anti-inflammatory agents might be useful as early preventive treatments after head trauma to stop HSV-1 activation in its tracks and lower the risk of Alzheimer’s disease,” lead investigator Dana Cairns, PhD, with the Department of Biomedical Engineering at Tufts University, Medford, Massachusetts, said in a statement.

But outside experts urged caution in drawing any firm conclusions, pending further study.

The study was published online in the journal Science Signaling.

 

HSV-1: A Major Alzheimer’s Disease Risk Factor?

TBI is a major risk factor for Alzheimer’s disease and dementia, but the pathways in the brain leading from TBI to dementia are unknown.

HSV-1 is found in over 80% of people; varicella zoster virus (VZV) is found in about 95%. Both viruses are known to enter the brain and lie dormant in neurons and glial cells. Prior evidence indicates that HSV-1 in the brain of APOE4 carriers confers a strong risk for Alzheimer’s disease.

A number of years ago, the team created a 3D model of human brain tissue to study the link between TBI, the viruses, and dementia. The model is 6 mm wide, shaped like a donut, and made of a spongy material of silk protein and collagen saturated with neural stem cells. The cells mature into neurons, communicate with each other, and form a network that mimics the brain environment.

In an earlier study using the model quiescently infected with HSV-1, Cairns and colleagues found that subsequent exposure to VZV created the inflammatory conditions that led to reactivation of HSV-1.

This led them to wonder what would happen if they subjected the brain tissue model to a physical disruption akin to a concussion. Would HSV-1 wake up and start the process of neurodegeneration?

To investigate, they examined the effects of one or more controlled blows to the 3D human brain tissue model in the absence or presence of quiescent HSV-1 infection.

After repeated, mild controlled blows, researchers found that the latently infected 3D brain tissue displayed reactivated HSV-1 and the production and accumulation of amyloid beta and p-tau, which promote neurodegeneration. The blows also activated gliosis, which is associated with destructive neuroinflammation.

These effects are collectively associated with Alzheimer’s disease, dementia, and chronic traumatic encephalopathy, they pointed out, and were increased with additional injury but were absent in tissue not infected with HSV-1.

“These data suggest that HSV-1 in the brain is pivotal in increasing the risk of Alzheimer’s disease, as other recent studies using cerebral organoids have suggested,” the researchers wrote.

They propose that following brain injury, “whether by infection or mechanical damage, the resulting inflammation induces HSV-1 reactivation in the brain leading to the development of Alzheimer’s disease/dementia and that HSV-1 is a major cause of the disease, especially in APOE4 carriers.”

Future studies should investigate “possible ways of mitigating or stopping the damage caused by head injury, thereby reducing subsequent development of Alzheimer’s disease by implementing efforts to prevent the reactivation of virus in brain such as anti-inflammatory and/or antiviral treatment post-injury,” researchers suggested.

 

Outside Experts Weigh in

Several outside experts offered perspective on the study in a statement from the UK nonprofit Science Media Centre.

Tara Spires-Jones, PhD, president of the British Neuroscience Association and group leader at the UK Dementia Research Institute, London, England, said that, while the study is interesting, there are limitations.

“The increase in Alzheimer’s-like brain changes in these latent virus-containing cells subjected to injury does not resemble the pathology that is found in the brain of people with Alzheimer’s disease,” Spires-Jones noted.

“These experiments were also in cells grown in artificial conditions without important Alzheimer’s-related factors such as age and blood vessel changes. Finally, these experiments were repeated in a small number of experimental replicates (three times per experiment), so these results will need to be confirmed in more relevant biological systems with larger studies to be sure there is a biological link between latent herpes simplex virus type 1, brain injury, and Alzheimer’s pathology,” Spires-Jones cautioned.

Robert Howard, MD, MRCPsych, University College London (UCL) Division of Psychiatry, said the study suggests a possible mechanism for the association between HSV-1, brain injury, and Alzheimer’s disease.

“However, as so often in science, it is very important to bear in mind that association does not mean causation. Much more research will be needed before this can be seriously considered a plausible mechanism for the development of dementia,” Howard cautioned.

“Avoidance of brain injuries, such as those encountered in some contact sports, is already known to be an important way to prevent dementia, and I’m unconvinced that this reflects anything more complicated than mechanical damage causing death of brain cells,” he added.

Jennifer Pocock, PhD, with UCL Queen Square Institute of Neurology, noted that the study does not address the role of microglia, which are activated by mild and repetitive TBI.

“This paper seems to suggest that only astrocytes contribute to the reported neuroinflammation in brain tissue. Also, the inclusion of APOE3/4 is not clearly defined. Because of this, the findings are likely to represent an overinterpretation for the ‘real world’ as the inclusion of microglia may negate or accentuate them, depending on the severity of the TBI,” Pocock said.

The study was funded by the US Army Research Office and Department of Defense. The authors have declared no relevant conflicts of interest. Spires-Jones and Howard had no relevant disclosures related to this study. Pocock has received research funding from AstraZeneca and Daiichi Sankyo.

A version of this article appeared on Medscape.com.



Brain Changes in Youth Who Use Substances: Cause or Effect?

Article Type
Changed
Fri, 01/03/2025 - 12:13

A widely accepted assumption in the addiction field is that neuroanatomical changes observed in young people who use alcohol or other substances are largely the consequence of exposure to these substances.

But a new study suggests that neuroanatomical features in children, including greater whole brain and cortical volumes, are evident before exposure to any substances.

The investigators, led by Alex P. Miller, PhD, assistant professor, Department of Psychiatry, Indiana University, Indianapolis, noted that the findings add to a growing body of work that suggests individual brain structure, along with environmental exposure and genetic risk, may influence risk for substance use disorder. 

The findings were published online in JAMA Network Open.

 

Neuroanatomy a Predisposing Risk Factor?

Earlier research showed that substance use is associated with lower gray matter volume, thinner cortex, and less white matter integrity. While it has been widely thought that these changes were induced by the use of alcohol or illicit drugs, recent longitudinal and genetic studies suggest that the neuroanatomical changes may also be predisposing risk factors for substance use.

To better understand the issue, investigators analyzed data on 9804 children (mean baseline age, 9.9 years; 53% male; 76% White) at 22 US sites enrolled in the Adolescent Brain Cognitive Development (ABCD) Study, which is examining brain and behavioral development from middle childhood to young adulthood.

The researchers collected information on the use of alcohol, nicotine, cannabis, and other illicit substances from in-person interviews at baseline and years 1, 2, and 3, as well as interim phone interviews at 6, 18, and 30 months. MRI scans provided extensive brain structural data, including global and regional cortical volume, thickness, surface area, sulcal depth, and subcortical volume.

Of the total, 3460 participants (35%) initiated substance use before age 15, with 90% reporting alcohol use initiation. There was considerable overlap between initiation of alcohol, nicotine, and cannabis.

The researchers tested whether baseline neuroanatomical variability was associated with any substance use initiation before or up to 3 years following initial neuroimaging scans. Study covariates included baseline age, sex, pubertal status, familial relationship (eg, sibling or twin), and prenatal substance exposures. Researchers didn’t control for sociodemographic characteristics as these could influence associations.

 

Significant Brain Differences

Compared with no substance use initiation, any substance use initiation was associated with larger global neuroanatomical indices, including whole brain (beta = 0.05; P = 2.80 × 10^−8), total intracranial (beta = 0.04; P = 3.49 × 10^−6), cortical (beta = 0.05; P = 4.31 × 10^−8), and subcortical volumes (beta = 0.05; P = 4.39 × 10^−8), as well as greater total cortical surface area (beta = 0.04; P = 6.05 × 10^−7).

The direction of associations between cortical thickness and substance use initiation was regionally specific; any substance use initiation was characterized by thinner cortex in all frontal regions (eg, rostral middle frontal gyrus, beta = −0.03; P = 6.99 × 10^−6) but thicker cortex in all other lobes. It was also associated with larger regional brain volumes, deeper regional sulci, and differences in regional cortical surface area.

The authors noted that total cortical thickness peaks at age 1.7 years and steadily declines throughout life. By contrast, subcortical volumes peak at age 14.4 years and generally remain stable before steep later-life declines.

Secondary analyses compared initiation of the three most commonly used substances in early adolescence (alcohol, nicotine, and cannabis) with no substance use.

Findings for alcohol largely mirrored those for any substance use. However, the study uncovered additional significant associations, including greater left lateral occipital volume, greater bilateral parahippocampal gyri cortical thickness, and less bilateral superior frontal gyri cortical thickness.

Nicotine use was associated with lower right superior frontal gyrus volume and deeper left lateral orbitofrontal cortex sulci. And cannabis use was associated with thinner left precentral gyrus and lower right inferior parietal gyrus and right caudate volumes.

The authors noted results for nicotine and cannabis may not have had adequate statistical power, and small effects suggest these findings aren’t clinically informative for individuals. However, they wrote, “They do inform and challenge current theoretical models of addiction.”

 

Associations Precede Substance Use

A post hoc analysis further challenges current models of addiction. When researchers looked only at the 1203 youth who initiated substance use after the baseline neuroimaging session, they found most associations preceded substance use.

“That regional associations may precede substance use initiation, including less cortical thickness in the right rostral middle frontal gyrus, challenges predominant interpretations that these associations arise largely due to neurotoxic consequences of exposure and increases the plausibility that these features may, at least partially, reflect markers of predispositional risk,” wrote the authors.

A study limitation was that unmeasured confounders and undetected systemic differences in missing data may have influenced associations. Sociodemographic, environmental, and genetic variables that were not included as covariates are likely associated with both neuroanatomical variability and substance use initiation and may moderate associations between them, said the authors.

The ABCD Study provides “a robust and large database of longitudinal data” that goes beyond previous neuroimaging research “to understand the bidirectional relationship between brain structure and substance use,” Miller said in a press release.

“The hope is that these types of studies, in conjunction with other data on environmental exposures and genetic risk, could help change how we think about the development of substance use disorders and inform more accurate models of addiction moving forward,” Miller said.

 

Reevaluating Causal Assumptions

In an accompanying editorial, Felix Pichardo, MA, and Sylia Wilson, PhD, from the Institute of Child Development, University of Minnesota, Minneapolis, suggested that it may be time to “reevaluate the causal assumptions that underlie brain disease models of addiction” and the mechanisms by which it develops, persists, and becomes harmful.

Neurotoxic effects of substances are central to current brain disease models of addiction, wrote Pichardo and Wilson. “Substance exposure is thought to affect cortical and subcortical regions that support interrelated systems, resulting in desensitization of reward-related processing, increased stress that prompts cravings, negative emotions when cravings are unsated, and weakening of cognitive control abilities that leads to repeated returns to use.”

The editorial writers praised the ABCD Study’s large sample size for providing precision, statistical accuracy, and the ability to identify both larger and smaller effects, all of which are critical for addiction research.

Unlike most addiction research that relies on cross-sectional designs, the current study used longitudinal assessments, which is another of its strengths, they noted.

“Longitudinal study designs like in the ABCD Study are fundamental for establishing temporal ordering across constructs, which is important because establishing temporal precedence is a key step in determining causal links and underlying mechanisms.”

The inclusion of several genetically informative components, such as the family study design, nested twin subsamples, and DNA collection, “allows researchers to extend beyond temporal precedence toward increased causal inference and identification of mechanisms,” they added.

The study received support from the National Institutes of Health. The study authors and editorial writers had no relevant conflicts of interest.

A version of this article appeared on Medscape.com.


 

Reevaluating Causal Assumptions

In an accompanying editorial, Felix Pichardo, MA, and Sylia Wilson, PhD, from the Institute of Child Development, University of Minnesota, Minneapolis, suggested that it may be time to “reevaluate the causal assumptions that underlie brain disease models of addiction” and the mechanisms by which it develops, persists, and becomes harmful.

Neurotoxic effects of substances are central to current brain disease models of addiction, wrote Pichardo and Wilson. “Substance exposure is thought to affect cortical and subcortical regions that support interrelated systems, resulting in desensitization of reward-related processing, increased stress that prompts cravings, negative emotions when cravings are unsated, and weakening of cognitive control abilities that leads to repeated returns to use.”

The editorial writers praised the ABCD Study for its large sample size for providing a level of precision, statistical accuracy, and ability to identify both larger and smaller effects, which are critical for addiction research.

Unlike most addiction research that relies on cross-sectional designs, the current study used longitudinal assessments, which is another of its strengths, they noted.

“Longitudinal study designs like in the ABCD Study are fundamental for establishing temporal ordering across constructs, which is important because establishing temporal precedence is a key step in determining causal links and underlying mechanisms.”

The inclusion of several genetically informative components, such as the family study design, nested twin subsamples, and DNA collection, “allows researchers to extend beyond temporal precedence toward increased causal inference and identification of mechanisms,” they added.

The study received support from the National Institutes of Health. The study authors and editorial writers had no relevant conflicts of interest.

A version of this article appeared on Medscape.com.

A widely accepted assumption in the addiction field is that neuroanatomical changes observed in young people who use alcohol or other substances are largely the consequence of exposure to these substances.

But a new study suggests that neuroanatomical features in children, including greater whole brain and cortical volumes, are evident before exposure to any substances.

The investigators, led by Alex P. Miller, PhD, assistant professor, Department of Psychiatry, Indiana University, Indianapolis, noted that the findings add to a growing body of work that suggests individual brain structure, along with environmental exposure and genetic risk, may influence risk for substance use disorder. 

The findings were published online in JAMA Network Open.

 

Neuroanatomy a Predisposing Risk Factor?

Earlier research showed that substance use is associated with lower gray matter volume, thinner cortex, and less white matter integrity. While it has been widely thought that these changes were induced by the use of alcohol or illicit drugs, recent longitudinal and genetic studies suggest that the neuroanatomical changes may also be predisposing risk factors for substance use.

To better understand the issue, investigators analyzed data on 9804 children (mean baseline age, 9.9 years; 53% male; 76% White) at 22 US sites enrolled in the Adolescent Brain Cognitive Development (ABCD) Study, which is examining brain and behavioral development from middle childhood into young adulthood.

The researchers collected information on the use of alcohol, nicotine, cannabis, and other illicit substances from in-person interviews at baseline and years 1, 2, and 3, as well as interim phone interviews at 6, 18, and 30 months. MRI scans provided extensive brain structural data, including global and regional cortical volume, thickness, surface area, sulcal depth, and subcortical volume.

Of the total, 3460 participants (35%) initiated substance use before age 15, with 90% reporting alcohol use initiation. There was considerable overlap between initiation of alcohol, nicotine, and cannabis.

The researchers tested whether baseline neuroanatomical variability was associated with any substance use initiation before, or up to 3 years after, the initial neuroimaging scans. Study covariates included baseline age, sex, pubertal status, familial relationship (eg, sibling or twin), and prenatal substance exposures. The researchers did not control for sociodemographic characteristics, as these could themselves influence the associations.

 

Significant Brain Differences

Compared with no substance use initiation, any substance use initiation was associated with larger global neuroanatomical indices, including whole brain (beta = 0.05; P = 2.80 × 10⁻⁸), total intracranial (beta = 0.04; P = 3.49 × 10⁻⁶), cortical (beta = 0.05; P = 4.31 × 10⁻⁸), and subcortical volumes (beta = 0.05; P = 4.39 × 10⁻⁸), as well as greater total cortical surface area (beta = 0.04; P = 6.05 × 10⁻⁷).

The direction of associations between cortical thickness and substance use initiation was regionally specific; any substance use initiation was characterized by thinner cortex in all frontal regions (eg, rostral middle frontal gyrus, beta = −0.03; P = 6.99 × 10⁻⁶) but thicker cortex in all other lobes. It was also associated with larger regional brain volumes, deeper regional sulci, and differences in regional cortical surface area.

The authors noted that total cortical thickness peaks at age 1.7 years and steadily declines throughout life. By contrast, subcortical volumes peak at age 14.4 years and generally remain stable before steep declines later in life.

Secondary analyses compared initiation of the three most commonly used substances in early adolescence (alcohol, nicotine, and cannabis) with no substance use.

Findings for alcohol largely mirrored those for any substance use. However, the study uncovered additional significant associations, including greater left lateral occipital volume and bilateral para-hippocampal gyri cortical thickness and less bilateral superior frontal gyri cortical thickness.

Nicotine use was associated with lower right superior frontal gyrus volume and deeper left lateral orbitofrontal cortex sulci. Cannabis use was associated with a thinner left precentral gyrus and lower right inferior parietal gyrus and right caudate volumes.

The authors cautioned that the nicotine and cannabis analyses may have lacked adequate statistical power, and the small effect sizes suggest these findings are not clinically informative at the individual level. However, they wrote, “They do inform and challenge current theoretical models of addiction.”

 

Associations Precede Substance Use

A post hoc analysis further challenges current models of addiction. When researchers looked only at the 1203 youth who initiated substance use after the baseline neuroimaging session, they found most associations preceded substance use.

“That regional associations may precede substance use initiation, including less cortical thickness in the right rostral middle frontal gyrus, challenges predominant interpretations that these associations arise largely due to neurotoxic consequences of exposure and increases the plausibility that these features may, at least partially, reflect markers of predispositional risk,” wrote the authors.

A study limitation was that unmeasured confounders and undetected systematic differences in missing data may have influenced associations. Sociodemographic, environmental, and genetic variables that were not included as covariates are likely associated with both neuroanatomical variability and substance use initiation and may moderate the associations between them, the authors said.

The ABCD Study provides “a robust and large database of longitudinal data” that goes beyond previous neuroimaging research “to understand the bidirectional relationship between brain structure and substance use,” Miller said in a press release.

“The hope is that these types of studies, in conjunction with other data on environmental exposures and genetic risk, could help change how we think about the development of substance use disorders and inform more accurate models of addiction moving forward,” Miller said.

 

Reevaluating Causal Assumptions

In an accompanying editorial, Felix Pichardo, MA, and Sylia Wilson, PhD, from the Institute of Child Development, University of Minnesota, Minneapolis, suggested that it may be time to “reevaluate the causal assumptions that underlie brain disease models of addiction” and the mechanisms by which it develops, persists, and becomes harmful.

Neurotoxic effects of substances are central to current brain disease models of addiction, wrote Pichardo and Wilson. “Substance exposure is thought to affect cortical and subcortical regions that support interrelated systems, resulting in desensitization of reward-related processing, increased stress that prompts cravings, negative emotions when cravings are unsated, and weakening of cognitive control abilities that leads to repeated returns to use.”

The editorial writers praised the ABCD Study’s large sample size, which provides the precision, statistical accuracy, and ability to identify both larger and smaller effects that are critical for addiction research.

Unlike most addiction research that relies on cross-sectional designs, the current study used longitudinal assessments, which is another of its strengths, they noted.

“Longitudinal study designs like in the ABCD Study are fundamental for establishing temporal ordering across constructs, which is important because establishing temporal precedence is a key step in determining causal links and underlying mechanisms.”

The inclusion of several genetically informative components, such as the family study design, nested twin subsamples, and DNA collection, “allows researchers to extend beyond temporal precedence toward increased causal inference and identification of mechanisms,” they added.

The study received support from the National Institutes of Health. The study authors and editorial writers had no relevant conflicts of interest.

A version of this article appeared on Medscape.com.

Article Source

FROM JAMA NETWORK OPEN


Nummular Headache Linked to Range of Secondary Causes

Article Type
Changed
Thu, 01/02/2025 - 13:48

A rare coin-shaped headache long viewed as a primary headache disorder frequently has underlying causes, according to new research, and clinicians should refer people who present with it for imaging.

First described in 2003, nummular or coin-shaped headache comprises an intermittent or constant pain limited to a rounded region between 1 and 6 cm in diameter. Classed as a primary headache by the International Classification of Headache Disorders (ICHD-3), it usually occurs in the parietal, or top rear, region of the head.

Despite nummular headache’s classification as a primary disorder, studies have linked some cases of coin-shaped headache to cranial or intracranial lesions. Now, a group of Spanish researchers has reviewed 20 years’ worth of cases, the largest series reported to date, and found a wide variety of causes, some of which, they say, had not previously been reported in connection with this headache type.

For their research, published online in Headache, Antonio Sánchez-Soblechero, MD, and colleagues at the University Hospital Gregorio Marañón in Madrid, Spain, examined clinical and imaging findings from 131 patients (67% women; median age at onset, 52 years) seen at their center from 2002 to 2022, seeking to identify differences between primary and secondary, or symptomatic, coin-shaped headache cases. All patients underwent cranial MRI, CT, or both.

Altogether, 26% of the nummular headaches (n = 34) were found to be associated with trauma, vascular malformations, cranial bone disorders, neoplasia, arachnoid cysts, hypertension, aneurysm, or skin disorders, including, in one case, a psoriasis plaque. Hypertension, aneurysm, and psoriasis had not previously been described as causes of this headache, the authors said.

According to the ICHD-3, the definition of nummular headache requires that secondary causes be excluded. The study authors proposed that “definite” secondary cases should meet ICHD-3 diagnostic criteria for both nummular headache and secondary headache, while “probable” cases meet all criteria for the former and all but one of the criteria for the latter. In their study, eight patients met the proposed criteria for “definite” secondary cases, while the rest were deemed “probable.”

Headache symptoms remained similar regardless of etiology, Sánchez-Soblechero and colleagues found, but coin-shaped headaches deemed to have secondary etiologies were significantly more likely to be associated with previous headache, remote head trauma, and longer symptom duration. The authors described treatments, including surgical interventions, for cases with secondary causes.

Preventive treatment was more effective in patients with determined causes for their headaches, Sánchez-Soblechero and colleagues found, with 72% seeing their monthly headache days halved, compared with just 30% of patients in whom a cause was not identified.

“The presence of any previous headache or remote head trauma may suggest a diagnosis of symptomatic nummular headache; however, as certain nummular headache might be an early symptom of intracranial mass lesions, neuroimaging is necessary. Finding the cause of nummular headache is essential to offer the most effective targeted treatment,” the investigators wrote in their analysis.

 

Primary Headache or Secondary?

In an interview, neurologist and headache specialist Nina Riggins, MD, PhD, of VA Palo Alto Health Care in California, praised the new findings as underscoring the importance of a thorough clinical approach.

“What this study shows is applicable to many primary headache disorders, whether migraine or cluster or nummular,” Riggins said. “Secondary headache can look like all of these headache types.”

Understanding what should be done to rule out secondary causes of headache is key for the correct diagnosis, she said. “In cases of coin-shaped headache, one should do a detailed neurological exam, consider imaging, check blood pressure, do blood work, and consider exams to exclude autoimmune psoriasis and other disorders as appropriate.”

Despite the inherent limits of its retrospective, single-center design, the study by Sánchez-Soblechero and colleagues is “extremely helpful in emphasizing that we should not dismiss [nummular headache] because it’s a little area of 1-6 centimeters,” Riggins said. “We absolutely have to make sure that we have ruled out secondary causes.” And while it would be useful to have evidence from prospective studies of nummular headache, “with such a rare headache, it’s hard. That’s why it’s so precious to have a study like this one, with 131 patients.”

Riggins acknowledged that the study emphasized the challenges of classifying and diagnosing nummular headache. The ICHD, last revised in 2018, “is a living, breathing document,” she said. “The idea is that as we learn more about headache disorders over time, this may mean changing some primary headaches to secondary, but we are clearly not there with this research: Most participants did not have a secondary cause for their coin-shaped headache.”

For now, Riggins said, “I think it’s best to keep classification straightforward for primary and secondary headache. It’s helpful for my day-to-day clinic life to have this neat division in place. But we do have to exclude secondary headache whenever possible in order to say that this is primary headache.”

Sánchez-Soblechero and coauthors disclosed no financial conflicts of interest related to their findings. Riggins disclosed consulting work for Gerson Lehrman Group, receiving research support from electroCore, Theranica, and Eli Lilly, and serving on advisory boards for Theranica, Teva Pharmaceuticals, Lundbeck, and Amneal Pharmaceuticals.

A version of this article appeared on Medscape.com.


Article Source

FROM HEADACHE


Common Herbicide a Player in Neurodegeneration?

Article Type
Changed
Wed, 12/11/2024 - 08:32

Chronic exposure to glyphosate — the most widely used herbicide globally — may be a risk factor for Alzheimer’s disease, new research showed. 

Researchers found that glyphosate exposure, even at regulated levels, was associated with increased neuroinflammation and accelerated Alzheimer’s disease–like pathology in mice, an effect that persisted through a 6-month recovery period after exposure was stopped.

“More research is needed to understand the consequences of glyphosate exposure to the brain in humans and to understand the appropriate dose of exposure to limit detrimental outcomes,” said co–senior author Ramon Velazquez, PhD, with Arizona State University, Tempe.

The study was published online in the Journal of Neuroinflammation.

 

Persistent Accumulation Within the Brain

Glyphosate is the most heavily applied herbicide in the United States, with roughly 300 million pounds used annually in agricultural communities nationwide. It is also used for weed control in parks, residential areas, and personal gardens.

The Environmental Protection Agency (EPA) has determined that glyphosate poses no risks to human health when used as directed. The World Health Organization’s International Agency for Research on Cancer disagrees, however, classifying the herbicide as “probably carcinogenic to humans.”

In addition to the possible cancer risk, multiple reports have also suggested potential harmful effects of glyphosate exposure on the brain. 

In earlier work, Velazquez and colleagues showed that glyphosate crosses the blood-brain barrier and infiltrates the brains of mice, contributing to neuroinflammation and other detrimental effects on brain function. 

In their latest study, they examined the long-term effects of glyphosate exposure on neuroinflammation and Alzheimer’s disease–like pathology using a mouse model.

They dosed 4.5-month-old mice genetically predisposed to Alzheimer’s disease, along with non-transgenic control mice, with 0, 50, or 500 mg/kg of glyphosate daily for 13 weeks, followed by a 6-month recovery period. 

The high dose is similar to levels used in earlier research, and the low dose is close to the limit used to establish the current EPA acceptable dose in humans.

Glyphosate’s metabolite, aminomethylphosphonic acid, was detectable and persisted in mouse brain tissue even 6 months after exposure ceased, the researchers reported. 

Additionally, there was a significant increase in soluble and insoluble fractions of amyloid-beta (Abeta), Abeta42 plaque load and plaque size, and phosphorylated tau at Threonine 181 and Serine 396 in hippocampus and cortex brain tissue from glyphosate-exposed mice, “highlighting an exacerbation of hallmark Alzheimer’s disease–like proteinopathies,” they noted. 

Glyphosate exposure was also associated with significant elevations in both pro- and anti-inflammatory cytokines and chemokines in brain tissue of transgenic and normal mice and in peripheral blood plasma of transgenic mice. 

Glyphosate-exposed transgenic mice also showed heightened anxiety-like behaviors and reduced survival. 

“These findings highlight that many chemicals we regularly encounter, previously considered safe, may pose potential health risks,” co–senior author Patrick Pirrotte, PhD, with the Translational Genomics Research Institute, Phoenix, Arizona, said in a statement.

“However, further research is needed to fully assess the public health impact and identify safer alternatives,” Pirrotte added. 

Funding for the study was provided by the National Institute on Aging, the National Cancer Institute, and the Arizona State University (ASU) Biodesign Institute. The authors declared no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.



FROM THE JOURNAL OF NEUROINFLAMMATION


New Cancer Vaccines on the Horizon: Renewed Hope or Hype?


Vaccines for treating and preventing cancer have long been considered a holy grail in oncology.

But aside from a few notable exceptions — including the human papillomavirus (HPV) vaccine, which has dramatically reduced the incidence of HPV-related cancers, and a Bacillus Calmette–Guérin vaccine, which helps prevent early-stage bladder cancer recurrence — most have failed to deliver.

Following a string of disappointments over the past decade, recent advances in the immunotherapy space are bringing renewed hope for progress.

In an American Association for Cancer Research (AACR) series earlier in 2024, Catherine J. Wu, MD, predicted big strides for cancer vaccines, especially for personalized vaccines that target patient-specific neoantigens — the proteins that form on cancer cells — as well as vaccines that can treat diverse tumor types.

“A focus on neoantigens that arise from driver mutations in different tumor types could allow us to make progress in creating off-the-shelf vaccines,” said Wu, the Lavine Family Chair of Preventative Cancer Therapies at Dana-Farber Cancer Institute and a professor of medicine at Harvard Medical School, both in Boston, Massachusetts.

A prime example is a personalized, messenger RNA (mRNA)–based vaccine designed to prevent melanoma recurrence. The mRNA-4157 vaccine encodes up to 34 different patient-specific neoantigens.

“This is one of the most exciting developments in modern cancer therapy,” said Lawrence Young, a virologist and professor of molecular oncology at the University of Warwick, Coventry, England, who commented on the investigational vaccine via the UK-based Science Media Centre.

Other promising options are on the horizon as well. In August, BioNTech announced a phase 1 global trial to study BNT116 — a vaccine to treat non–small cell lung cancer (NSCLC). BNT116, like mRNA-4157, targets specific antigens in the lung cancer cells.

“This technology is the next big phase of cancer treatment,” Siow Ming Lee, MD, a consultant medical oncologist at University College London Hospitals in England, which is leading the UK trial for the lung cancer and melanoma vaccines, told The Guardian. “We are now entering this very exciting new era of mRNA-based immunotherapy clinical trials to investigate the treatment of lung cancer.”

Still, these predictions have a familiar ring. While the prospects are exciting, delivering on them is another story. There are simply no guarantees these strategies will work as hoped.

 

Then: Where We Were

Cancer vaccine research began to ramp up in the 2000s, and in 2006, the first-generation HPV vaccine, Gardasil, was approved. Gardasil prevents infection by four HPV types, including the two high-risk types that cause about 70% of cervical cancer cases.

In 2010, the Food and Drug Administration approved sipuleucel-T, the first therapeutic cancer vaccine, which improved overall survival in patients with hormone-refractory prostate cancer.

Researchers predicted this approval would “pave the way for developing innovative, next generation of vaccines with enhanced antitumor potency.”

In a 2015 AACR research forecast report, Drew Pardoll, MD, PhD, co-director of the Cancer Immunology and Hematopoiesis Program at Johns Hopkins University, Baltimore, Maryland, said that “we can expect to see encouraging results from studies using cancer vaccines.”

Despite the excitement surrounding cancer vaccines alongside a few successes, the next decade brought a longer string of late-phase disappointments.

In 2016, the phase 3 ACT IV trial of a therapeutic vaccine to treat glioblastoma multiforme (CDX-110) was terminated after it failed to demonstrate improved survival.

In 2017, a phase 3 trial of the therapeutic pancreatic cancer vaccine, GVAX, was stopped early for lack of efficacy.

That year, an attenuated Listeria monocytogenes vaccine to treat pancreatic cancer and mesothelioma also failed to come to fruition. In late 2017, concerns over listeria infections prompted Aduro Biotech to cancel its listeria-based cancer treatment program.

In 2018, a phase 3 trial of belagenpumatucel-L, a therapeutic NSCLC vaccine, failed to demonstrate a significant improvement in survival and further study was discontinued.

And in 2019, a vaccine targeting MAGE-A3, a cancer-testis antigen present in multiple tumor types, failed to meet endpoints for improved survival in a phase 3 trial, leading to discontinuation of the vaccine program.

But these disappointments and failures are normal parts of medical research and drug development and have allowed for incremental advances that helped fuel renewed interest and hope for cancer vaccines, when the timing was right, explained vaccine pioneer Larry W. Kwak, MD, PhD, deputy director of the Comprehensive Cancer Center at City of Hope, Duarte, California.

When it comes to vaccine progress, timing makes a difference. In 2011, Kwak and colleagues published promising phase 3 trial results on a personalized vaccine built from a patient-specific, tumor-derived antigen for patients with follicular lymphoma in first remission after chemotherapy. Patients who received the vaccine demonstrated significantly longer disease-free survival.

But, at the time, personalized vaccines faced strong headwinds due, largely, to high costs, and commercial interest failed to materialize. “That’s been the major hurdle for a long time,” said Kwak.

Now, however, interest has returned alongside advances in technology and research. The big shift has been the emergence of lower-cost rapid-production mRNA and DNA platforms and a better understanding of how vaccines and potent immune stimulants, like checkpoint inhibitors, can work together to improve outcomes, he explained.

“The timing wasn’t right” back then, Kwak noted. “Now, it’s a different environment and a different time.”

 

A Turning Point?

Indeed, a decade later, cancer vaccine development appears to be headed in a more promising direction.

Among key cancer vaccines to watch is the mRNA-4157 vaccine, developed by Merck and Moderna, designed to prevent melanoma recurrence. In a recent phase 2 study, patients receiving the mRNA-4157 vaccine alongside pembrolizumab had nearly half the risk for melanoma recurrence or death at 3 years compared with those receiving pembrolizumab alone. Investigators are now evaluating the vaccine in a global phase 3 study in patients with high-risk, stage IIB to IV melanoma following surgery.

Another one to watch is the BNT116 NSCLC vaccine from BioNTech. This vaccine presents the immune system with NSCLC tumor markers to encourage the body to fight cancer cells expressing those markers while ignoring healthy cells. BioNTech also launched a global clinical trial for its vaccine this year.

Other notables include a pancreatic cancer mRNA vaccine, which has shown promising early results in a small trial. Of the 16 patients who received the vaccine after surgery, alongside chemotherapy and immunotherapy, 8 responded; 6 of those 8 remained recurrence free at 3 years. Investigators noted that the vaccine appeared to stimulate a durable T-cell response in patients who responded.

Kwak has also continued his work on lymphoma vaccines. In August, his team published promising first-in-human data on the use of personalized neoantigen vaccines as an early intervention in untreated patients with lymphoplasmacytic lymphoma. Among nine asymptomatic patients who received the vaccine, all achieved stable disease or better, with no dose-limiting toxicities. One patient had a minor response, and the median time to progression was greater than 72 months.

“The current setting is more for advanced disease,” Kwak explained. “It’s a tougher task, but combined with checkpoint blockade, it may be potent enough to work.” 

Still, caution is important. Despite early promise, it’s too soon to tell which, if any, of these investigational vaccines will pan out in the long run. Like investigational drugs, cancer vaccines may show great promise initially but then fail in larger trials.

One key to success, according to Kwak, is to design trials so that even negative results will inform next steps.

But, he noted, failures in large clinical trials will “put a chilling effect on cancer vaccine research again.”

“That’s what keeps me up at night,” he said. “We know the science is fundamentally sound and we have seen glimpses over decades of research that cancer vaccines can work, so it’s really just a matter of tweaking things to optimize trial design.”

Companies tend to design trials to test if a vaccine works or not, without trying to understand why, he said.

“What we need to do is design those so that we can learn from negative results,” he said. That’s what he and his colleagues attempted to do in their recent trial. “We didn’t just look at clinical results; we’re interrogating the actual tumor environment to understand what worked and didn’t and how to tweak that for the next trial.”

Kwak and his colleagues found, for instance, that the vaccine had a greater effect on B cell–derived tumor cells than on cells of plasma origin, so “the most rational design for the next iteration is to combine the vaccine with agents that work directly against plasma cells,” he explained.

As for what’s next, Kwak said: “We’re just focused on trying to do good science and understand. We’ve seen glimpses of success. That’s where we are.”

A version of this article first appeared on Medscape.com.


But these disappointments and failures are normal parts of medical research and drug development and have allowed for incremental advances that helped fuel renewed interest and hope for cancer vaccines, when the timing was right, explained vaccine pioneer Larry W. Kwak, MD, PhD, deputy director of the Comprehensive Cancer Center at City of Hope, Duarte, California.

When it comes to vaccine progress, timing makes a difference. In 2011, Kwak and colleagues published promising phase 3 trial results on a personalized vaccine — a patient-specific, tumor-derived antigen — for patients with follicular lymphoma in their first remission following chemotherapy. Patients who received the vaccine demonstrated significantly longer disease-free survival.

But, at the time, personalized vaccines faced strong headwinds due, largely, to high costs, and commercial interest failed to materialize. “That’s been the major hurdle for a long time,” said Kwak.

Now, however, interest has returned alongside advances in technology and research. The big shift has been the emergence of lower-cost rapid-production mRNA and DNA platforms and a better understanding of how vaccines and potent immune stimulants, like checkpoint inhibitors, can work together to improve outcomes, he explained.

“The timing wasn’t right” back then, Kwak noted. “Now, it’s a different environment and a different time.”

 

A Turning Point?

Indeed, a decade later, cancer vaccine development appears to be headed in a more promising direction.

Among key cancer vaccines to watch is the mRNA-4157 vaccine, developed by Merck and Moderna, designed to prevent melanoma recurrence. In a recent phase 2 study, patients receiving the mRNA-4157 vaccine alongside pembrolizumab had nearly half the risk for melanoma recurrence or death at 3 years compared with those receiving pembrolizumab alone. Investigators are now evaluating the vaccine in a global phase 3 study in patients with high-risk, stage IIB to IV melanoma following surgery.

Another one to watch is the BNT116 NSCLC vaccine from BioNTech. This vaccine presents the immune system with NSCLC tumor markers to encourage the body to fight cancer cells expressing those markers while ignoring healthy cells. BioNTech also launched a global clinical trial for its vaccine this year.

Other notables include a pancreatic cancer mRNA vaccine, which has shown promising early results in a small trial of 16 patients. Of the 16 patients, who received the vaccine after surgery alongside chemotherapy and immunotherapy, 8 responded, and 6 of those 8 remained recurrence free at 3 years. Investigators noted that the vaccine appeared to stimulate a durable T-cell response in patients who responded.

Kwak has also continued his work on lymphoma vaccines. In August, his team published promising first-in-human data on the use of personalized neoantigen vaccines as an early intervention in untreated patients with lymphoplasmacytic lymphoma. Among nine asymptomatic patients who received the vaccine, all achieved stable disease or better, with no dose-limiting toxicities. One patient had a minor response, and the median time to progression was greater than 72 months.

“The current setting is more for advanced disease,” Kwak explained. “It’s a tougher task, but combined with checkpoint blockade, it may be potent enough to work.” 

Still, caution is important. Despite early promise, it’s too soon to tell which, if any, of these investigational vaccines will pan out in the long run. Like investigational drugs, cancer vaccines may show big promise initially but then fail in larger trials.

One key to success, according to Kwak, is to design trials so that even negative results will inform next steps.

But, he noted, failures in large clinical trials will “put a chilling effect on cancer vaccine research again.”

“That’s what keeps me up at night,” he said. “We know the science is fundamentally sound and we have seen glimpses over decades of research that cancer vaccines can work, so it’s really just a matter of tweaking things to optimize trial design.”

Companies tend to design trials to test if a vaccine works or not, without trying to understand why, he said.

“What we need to do is design those so that we can learn from negative results,” he said. That’s what he and his colleagues attempted to do in their recent trial. “We didn’t just look at clinical results; we’re interrogating the actual tumor environment to understand what worked and didn’t and how to tweak that for the next trial.”

Kwak and his colleagues found, for instance, that the vaccine had a greater effect on B cell–derived tumor cells than on cells of plasma origin, so “the most rational design for the next iteration is to combine the vaccine with agents that work directly against plasma cells,” he explained.

As for what’s next, Kwak said: “We’re just focused on trying to do good science and understand. We’ve seen glimpses of success. That’s where we are.”

A version of this article first appeared on Medscape.com.


Dark Chocolate: A Bittersweet Remedy for Diabetes Risk

Article Type
Changed
Wed, 01/08/2025 - 03:12

TOPLINE:

Consuming five or more servings per week of dark chocolate is associated with a lower risk for type 2 diabetes (T2D), compared with infrequent or no consumption. Conversely, a higher consumption of milk chocolate does not significantly affect the risk for diabetes and may contribute to greater weight gain.

METHODOLOGY:

  • Chocolate is rich in flavanols, natural compounds known to support heart health and lower the risk for T2D. However, the link between chocolate consumption and the risk for T2D is uncertain, with inconsistent research findings that don’t distinguish between dark or milk chocolate.
  • Researchers conducted a prospective cohort study to investigate the associations between dark, milk, and total chocolate consumption and the risk for T2D in three long-term US studies of female nurses and male healthcare professionals with no history of diabetes, cardiovascular disease, or cancer at baseline.
  • The relationship between total chocolate consumption and the risk for diabetes was investigated in 192,208 individuals who reported their chocolate consumption using validated food frequency questionnaires every 4 years from 1986 onward.
  • Information on chocolate subtypes was assessed from 2006/2007 onward in 111,654 participants.
  • Participants self-reported T2D through biennial questionnaires, which was confirmed via supplementary questionnaires collecting data on glucose levels, hemoglobin A1c concentration, symptoms, and treatments; they also self-reported their body weight at baseline and during follow-ups.

TAKEAWAY:

  • During 4,829,175 person-years of follow-up, researchers identified 18,862 individuals with incident T2D in the total chocolate analysis cohort.
  • In the chocolate subtype cohort, 4771 incident T2D cases were identified during 1,270,348 person-years of follow-up. Having at least five servings per week of dark chocolate was associated with a 21% lower risk for T2D (adjusted hazard ratio, 0.79; P for trend = .006), while milk chocolate consumption showed no significant link (P for trend = .75).
  • The risk for T2D decreased by 3% for each additional serving of dark chocolate consumed weekly, indicating a dose-response effect.
  • Compared with individuals who did not change their chocolate intake, those who had an increased milk chocolate intake had greater weight gain over 4-year periods (mean difference, 0.35 kg; 95% CI, 0.27-0.43); dark chocolate showed no significant association with weight change.

IN PRACTICE:

“Even though dark and milk chocolate have similar levels of calories and saturated fat, it appears that the rich polyphenols in dark chocolate might offset the effects of saturated fat and sugar on weight gain and diabetes. It’s an intriguing difference that’s worth exploring more,” corresponding author Qi Sun from the Departments of Nutrition and Epidemiology, Harvard TH Chan School of Public Health, Boston, Massachusetts, said in a press release.

SOURCE:

This study was led by Binkai Liu, Harvard TH Chan School of Public Health. It was published online in The BMJ.

LIMITATIONS:

The relatively limited number of participants in the higher chocolate consumption groups may have reduced the statistical power for detecting modest associations between dark chocolate consumption and the risk for T2D. Additionally, the study population primarily consisted of non-Hispanic White adults older than 50 years at baseline, which, along with their professional backgrounds, may have limited the generalizability of the study findings to other populations with different socioeconomic or personal characteristics. Chocolate consumption in this study was lower than the national average of three servings per week, which may have limited the ability to assess the dose-response relationship at higher intake levels.

DISCLOSURES:

This study was supported by grants from the National Institutes of Health. Some authors reported receiving investigator-initiated grants, being on scientific advisory boards, and receiving research funding from certain institutions.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.



Microplastics Have Been Found in the Human Brain. Now What?

Article Type
Changed
Wed, 11/27/2024 - 13:45

Microplastics have been found in the lungs, liver, blood, and heart. Now, researchers report they have found the first evidence of the substances in human brains.

In a recent case series study that examined olfactory bulb tissue from deceased individuals, 8 of the 15 decedent brains showed the presence of microplastics, most commonly polypropylene, a plastic typically used in food packaging and water bottles.

Measuring less than 5 mm in size, microplastics are formed over time as plastic materials break down but don’t biodegrade. Exposure to these substances can come through food, air, and skin absorption.

While scientists are learning more about how these substances are absorbed by the body, questions remain about how much exposure is safe, what effect — if any — microplastics could have on brain function, and what clinicians should tell their patients.

 

What Are the Major Health Concerns?

The Plastic Health Council estimates that more than 500 million metric tons of plastic are produced worldwide each year. In addition, it reports that plastic products can contain more than 16,000 chemicals, about a quarter of which have been found to be hazardous to human health and the environment. Microplastics and nanoplastics can enter the body through the air, in food, or absorption through the skin.

A study published in March showed that patients with carotid plaques and the presence of microplastics and nanoplastics were at an increased risk for death or major cardiovascular events.

Other studies have shown a link between these substances and placental inflammation and preterm births, reduced male fertility, and endocrine disruption — as well as accelerated spread of cancer cells in the gut.

There is also evidence suggesting that microplastics may facilitate the development of antibiotic resistance in bacteria and could contribute to the rise in food allergies.

And now, Thais Mauad, MD, PhD, and colleagues have found the substances in the brain.

 

How Is the Brain Affected?

The investigators examined olfactory bulb tissues from 15 deceased Sao Paulo, Brazil, residents ranging in age from 33 to 100 years who underwent routine coroner autopsies. All but three of the participants were men.

Exclusion criteria included having undergone previous neurosurgical interventions. The tissues were analyzed using micro–Fourier transform infrared spectroscopy (µFTIR).

In addition, the researchers practiced a “plastic-free approach” in their analysis, which included using filters and covering glassware and samples with aluminum foil.

Study findings showed microplastics in 8 of the 15 participants — including in the centenarian. In total, there were 16 synthetic polymer particles and fibers detected, with up to four microplastics detected per olfactory bulb. Polypropylene was the most common polymer found (44%), followed by polyamide, nylon, and polyethylene vinyl acetate. These substances are commonly used in a wide range of products, including food packaging, textiles, kitchen utensils, medical devices, and adhesives.

The microplastic particles ranged in length from 5.5 to 26 microns (one millionth of a meter), with a width that ranged from 3 to 25 microns. The mean fiber length and width were 21 and 4 microns, respectively. For comparison, the diameter of one human hair averages about 70 microns, according to the US Food and Drug Administration (FDA).

“To our knowledge, this is the first study in which the presence of microplastics in the human brain was identified and characterized using µFTIR,” the researchers wrote.

 

How Do Microplastics Reach the Brain?

Although the possibility of microplastics crossing the blood-brain barrier has been questioned, senior investigator Mauad, associate professor in the Department of Pathology, the University of Sao Paulo in Brazil, noted that the olfactory pathway could offer an entry route through inhalation of the particles.

This means that “breathing within indoor environments could be a major source of plastic pollution in the brain,” she said in a press release.

“With much smaller nanoplastics entering the body with greater ease, the total level of plastic particles may be much higher. What is worrying is the capacity of such particles to be internalized by cells and alter how our bodies function,” she added.

Mauad said that although questions remain regarding the health implications of their findings, some animal studies have shown that the presence of microplastics in the brain is linked to neurotoxic effects, including oxidative stress.

In addition, exposure to particulate matter has been linked previously to neurologic conditions such as dementia, and neurodegenerative conditions such as Parkinson’s disease “seem to have a connection with nasal abnormalities as initial symptoms,” the investigators noted.

While the olfactory pathway appears to be a likely route of exposure, the researchers noted that other potential entry routes, including through blood circulation, may also be involved.

The research suggests that inhaling microplastics while indoors may be unavoidable, Mauad said, making it unlikely individuals can eliminate exposure to these substances.

“Everything that surrounds us is plastic. So we can’t really get rid of it,” she said.

 

Are Microplastics Regulated?

The most effective solution would be stricter regulations, Mauad said.

“The industry has chosen to sell many things in plastic, and I think this has to change. We need more policies to decrease plastic production — especially single-use plastic,” she said.

Federal, state, and local regulations for microplastics are “virtually nonexistent,” reported the Interstate Technology and Regulatory Council (ITRC), a state-led coalition that produces documents and trainings related to regulatory issues.

In 2021, the ITRC sent a survey to all US states asking about microplastics regulations. Of the 26 states that responded, only 4 said they had conducted sampling for microplastics. None of the responders indicated they had established any criteria or standards for microplastics, although eight states indicated they had plans to pursue them in the future.

Although federal regulations include the Microbead-Free Waters Act of 2015 and the Save Our Seas Act 2.0, the rules don’t directly pertain to microplastics.

There are also no regulations currently in place regarding microplastics or nanoplastics in food. A report issued in July by the FDA claimed that “the overall scientific evidence does not demonstrate that levels of microplastics or nanoplastics found in foods pose a risk to human health.”

International efforts to regulate microplastics are much further along. First drafted in 2022, the UN Global Plastic Treaty would forge an international, legally binding agreement.

While it is a step in the right direction, the Plastic Health Council has cautioned about “the omission of measures in draft provisions that fully address the impact of plastic pollution on human health.” The treaty should reduce plastic production, eliminate single-use plastic items, and call for testing of all chemicals in plastics, the council argues.

The final round of negotiations for the UN Global Plastic Treaty is set for completion before the end of the year.

 

What Should Clinicians Know?

Much remains unknown about the potential health effects of microplastic exposure. So how can clinicians respond to questions from concerned patients?

“We don’t yet have enough evidence about the plastic particle itself, like those highlighted in the current study — and even more so when it comes to nanoplastics, which are a thousand times smaller,” said Phoebe Stapleton, PhD, associate professor in the Department of Pharmacology and Toxicology at the Ernest Mario School of Pharmacy at Rutgers University, Piscataway, New Jersey.

“But we do have a lot of evidence about the chemicals that are used to make plastics, and we’ve already seen regulation there from the EPA. That’s one conversation that clinicians could have with patients: about those chemicals,” she added.

Stapleton recommended clinicians stay current on the latest research and be ready to respond should a patient raise the issue. She also noted the importance of exercising caution when interpreting these new findings.

While the study is important — especially because it highlights inhalation as a viable route of entry — exposure through the olfactory area is still just a theory and hasn’t yet been fully proven.

In addition, Stapleton wonders whether there are tissues where these substances are not found. A discovery like that “would be really exciting because that means that that tissue has mechanisms protecting it, and maybe, we could learn more about how to keep microplastics out,” she said.

She would also like to see more studies on specific adverse health effects from microplastics in the body.

Mauad agreed.

“That’s the next set of questions: What are the toxicities or lack thereof in those tissues? That will give us more information as it pertains to human health. It doesn’t feel good to know they’re in our tissues, but we still don’t have a real understanding of what they’re doing when they’re there,” she said.

The current study was funded by the Alexander von Humboldt Foundation and by grants from the Brazilian Research Council and the Sao Paulo State Research Agency. It was also funded by the Plastic Soup Foundation — which, together with A Plastic Planet, forms the Plastic Health Council. The investigators and Stapleton reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections


