FDA Approves Topical Anticholinergic for Axillary Hyperhidrosis

Article Type
Changed
Tue, 06/25/2024 - 10:35

The Food and Drug Administration has approved a topical anticholinergic, sofpironium topical gel, 12.45%, for the treatment of primary axillary hyperhidrosis in adults and children aged ≥ 9 years.

According to a press release from Botanix Pharmaceuticals, which developed the product and will market it under the brand name Sofdra, approval was based on results from two phase 3 studies that enrolled 710 patients with primary axillary hyperhidrosis. In the trials, patients treated with sofpironium topical gel, 12.45%, experienced “clinically and statistically meaningful changes” from baseline in the Gravimetric Sweat Production and the Hyperhidrosis Disease Severity Measure–Axillary seven-item score, according to the company.

Botanix plans to enable qualified patients to gain early access to the product in the third quarter of 2024, with commercial sales expected in the fourth quarter of 2024.
 

A version of this article first appeared on Medscape.com.

Should ctDNA guide clinical decisions in GI cancers?

Article Type
Changed
Wed, 08/28/2024 - 12:40

CHICAGO – Circulating tumor DNA (ctDNA), or DNA shed from tumors that is detected in the bloodstream, has shown increasing promise as a prognostic tool in gastrointestinal cancers, allowing investigators to make real-time assessments of treatment response and the likelihood of recurrence.

Depending on the type of assay and analysis used, ctDNA can provide a wealth of information about cancer genetic variants. ctDNA assays can be used for primary screening, to track tumor burden, or to detect minimal residual disease (MRD) after cancer surgery.

However, ctDNA’s role in guiding clinical decisions is still being defined. At the annual meeting of the American Society of Clinical Oncology (ASCO) in Chicago, Australian investigators presented research showing that a negative ctDNA finding can be used to avoid unnecessary chemotherapy in postoperative patients with stage II colon cancer without affecting survival outcomes.

The same group also presented exploratory findings showing that positive ctDNA is a significant predictor of recurrence in people with early-stage pancreatic cancer following surgery. However, the investigators concluded, ctDNA status should not be used to inform treatment decisions concerning duration of adjuvant chemotherapy in these patients.
 

DYNAMIC Trial Results

Jeanne Tie, MD, of the Peter MacCallum Cancer Centre in Melbourne, presented 5-year survival results at ASCO from the DYNAMIC randomized controlled trial, whose 2-year findings had already shown ctDNA to be helpful in stratifying stage II colon cancer patients for adjuvant chemotherapy or no treatment.

Because surgery is curative in 80% of these patients, it is important to identify the minority that will need chemotherapy, Dr. Tie said.

At 5 years’ follow-up, Dr. Tie reported, patients randomized to a ctDNA-guided approach (negative ctDNA post surgery resulted in no treatment, and positive ctDNA led to adjuvant chemotherapy) did not see differences in overall survival compared with conventionally managed patients, who received chemotherapy at the clinician’s discretion.

Among ctDNA-guided patients in the study (n = 302), 5-year overall survival was 93.8%. For conventionally managed patients (n = 153), overall survival was 93.3% at 5 years (hazard ratio [HR], 1.05; 95% CI, 0.47-2.37; P = .887).

Further, the researchers found that a high ctDNA clearance rate was achieved with adjuvant chemotherapy in postoperative patients who were ctDNA positive. And 5-year recurrence-free survival was markedly higher in patients who achieved ctDNA clearance than in those who did not: 85.2% vs 20% (HR, 15.4; 95% CI, 3.91-61.0; P < .001).

“This approach of only treating patients with a positive ctDNA achieved excellent survival outcomes, including in patients with T4 disease. A high ctDNA clearance rate can be achieved with adjuvant chemotherapy, and this in turn was associated with favorable outcomes,” Dr. Tie said during the meeting. “And finally, the precision of the ctDNA approach may be further refined by increasing [the number of genetic variants] tracked and by incorporating ctDNA molecular burden. However, these findings will require further validation.”
 

DYNAMIC-Pancreas Study Results

In a separate presentation during the same session, Belinda Lee, MD, also of the Peter MacCallum Cancer Centre, showed results from the DYNAMIC-Pancreas study, which looked at ctDNA testing a median 5 weeks after surgery in 102 people with early-stage (Eastern Cooperative Oncology Group 0-1) pancreatic cancer. Patients who were ctDNA positive received 6 months of adjuvant chemotherapy of the physician’s choice (FOLFIRINOX or gemcitabine/capecitabine) while those who were ctDNA negative after surgery had the option to de-escalate to 3 months of chemotherapy treatment at the physician’s discretion.

At a median 3 years’ follow-up, Dr. Lee and colleagues found that the median recurrence-free survival was 13 months for patients who were ctDNA positive after surgery and 22 months for those who were ctDNA negative (HR, 0.52; P = .003), showing that positive ctDNA is prognostic of earlier recurrence independent of other factors.

Dr. Lee said that, given the high recurrence risk also seen in ctDNA-negative patients, reducing duration of chemotherapy was not recommended based on ctDNA-negative status.

In an interview, Stacey Cohen, MD, of Fred Hutch Cancer Center in Seattle, Washington, the discussant on the two presentations at ASCO, said that, until these results are further validated in stage II colon cancer patients, it is unlikely that they will change clinical practice guidelines.

“They did an amazing job,” Dr. Cohen said of the researchers. “They’re at the forefront of the field of actually doing prospective analysis. And yet there are still some gaps that are missing in our understanding.”

The assays used in both studies, Dr. Cohen noted, are used only in research and are not available commercially in the United States. That, plus the fact that physicians were allowed to choose between chemotherapy regimens, made it harder to parse the results.

“Provider choice increases bias,” Dr. Cohen said. “And I think that’s the problem of having two chemo regimens to choose from, or in the case of the colon cancer trial, not selecting whether patients got a single chemotherapy agent or a doublet. These are pretty big differences.”

But the field is moving quickly, “and it is an exciting time to improve patient selection for chemotherapy treatment,” she continued.

Allowing physicians to choose chemotherapy regimens reflected real-world clinical practice, “especially given that this study is designed to test a strategy rather than a specific treatment,” said Dr. Tie in an interview. “More work will need to be done to specifically address the question of which chemotherapy regimen is more effective to treat ctDNA-positive disease.”

Dr. Cohen noted that, while evidence is mounting to support the value of ctDNA in colon cancer, there is far less evidence for pancreatic cancer.

Dr. Lee and colleagues’ study “adds to the literature, and I think what it teaches us is that ctDNA remains a prognostic risk factor,” she said. “But we saw that even patients who are negative have a high recurrence risk. So we’re not ready to act on it yet. As with the colon cancer study, different chemotherapy regimens were used, and for different time lengths.”

Whether in colon cancer or pancreatic cancer, ctDNA results “are highly tied to which assay you’re using and which scenario you’re testing them in,” Dr. Cohen said.

Dr. Tie and colleagues’ study was sponsored by her institution, with additional funding received from the Australian government, the National Institutes of Health, and other foundations. She disclosed speaking and/or consulting fees from Haystack Oncology, Amgen, Novartis, Bristol-Myers Squibb, Merck, AstraZeneca, and others. Dr. Lee’s study was sponsored by the Marcus Foundation. She disclosed receiving honoraria from Roche. Dr. Cohen reported no conflicts of interest.

FROM ASCO 2024


Rethinking Management of Skin Cancer in Older Patients

Article Type
Changed
Tue, 06/25/2024 - 17:56

WASHINGTON — In 2013, Vishal A. Patel, MD, was completing a fellowship in Mohs surgery and cutaneous oncology at Columbia University Irving Medical Center, New York City, when a study was published showing that most nonmelanoma skin cancers (NMSCs) were treated with surgery, regardless of the patient’s life expectancy. Life expectancy “should enter into treatment decisions,” the authors concluded.

The article “got a lot of pushback from the Mohs surgeons,” and the value of surgery in older adults, particularly those with limited life expectancy, “became a hot topic,” Dr. Patel recalled at the ElderDerm conference hosted by the Department of Dermatology at George Washington University, Washington, DC, and described as a first-of-its-kind meeting dedicated to improving dermatologic care for older adults.

Dr. Vishal A. Patel (right), director of the cutaneous oncology program at the GW Cancer Center, and Dr. Christina Prather, director and associate professor of geriatrics and palliative medicine, George Washington University. (Photo: Christine Kilgore)

Today, however, more than a decade later, guidelines still promote surgical therapy as the gold standard across the board, and the questions raised by the study remain unaddressed, Dr. Patel, associate professor of dermatology and medicine/oncology at George Washington University, said at the meeting. These questions are becoming increasingly urgent as the incidence of skin cancer, especially NMSC, rises in the older adult population, particularly among patients older than 85 years. “It’s a function of our training and our treatment guidelines that we reach for the most definitive treatment, which happens to be the most aggressive, in these patients,” added Dr. Patel, who is also director of the cutaneous oncology program at the GW Cancer Center.

“Sometimes we lose track of what ... we need to do” to provide care that reflects the best interests of the older patient, he continued. “Surgery may be the gold standard for treating the majority of NMSCs ... but is it the [best option] for what our older patients and patients with limited life expectancy need?”

Learning about what truly matters to the patient is a key element of the “age-friendly, whole-person care” that dermatologists must embrace as older adults become an increasingly large subset of their patient population, Christina Prather, MD, director and associate professor of geriatrics and palliative medicine at George Washington University, said at the meeting.

By 2040, projections are that the number of adults aged 85 years and older in the United States will be nearly quadruple the number in 2000, according to one estimate.

“We know that there are less than 6000 practicing geriatricians in the country ... [so the healthcare system] needs more of you who know how to bring an age-friendly approach to care,” Dr. Prather said. Dermatology is among the specialties that need to be “geriatricized.”
 

NMSC Increasing in the Older Population

The incidence of skin cancer is rising faster than that of any other cancer, Dr. Patel said. One window into the epidemiology, he said, comes from recently published data showing that an average of 6.1 million adults were treated each year for skin cancer during 2016-2018 (5.2 million of them for NMSC) — an increase from an average of 5.8 million annually in the 2012-2015 period. The data come from the Medical Expenditure Panel Survey (MEPS), which is conducted by the US Public Health Service through the Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention.

As a frame of reference, the average number of adults treated each year for nonskin cancers during these periods rose from 10.8 to 11.9 million, according to the 2023 MEPS data. “Skin cancer is about one-third of all cancers combined,” Dr. Patel said.

Not only is the incidence of NMSC significantly higher than that of melanoma but it also shows a more prominent aging trend. This was documented recently in a long-term observational study from Japan, in which researchers looked at the change in the median age of patients with NMSC and melanoma, compared with cancers of other organs, from 1991 to 2020 and found that NMSC had by far the greatest rise in median age, which reached 80 years.

Even more notable, Dr. Patel said, was a particularly marked increase in the number of patients with skin cancer aged 90 years and older. In 2021, this group of older adults accounted for 17% of patients receiving treatment for skin cancer at the Japanese hospital where the data were collected.

The 2013 study that stirred Dr. Patel as a fellow was of 1536 consecutive patients diagnosed with NMSC at two dermatology clinics (a University of California San Francisco–based private clinic and a Veterans Affairs Medical Center clinic) and followed for 6 years. “What’s interesting and worth thinking about is that, regardless of patients’ life expectancy, NMSCs were treated aggressively and surgically, and the choice of surgery was not influenced by the patient’s poor prognosis in a multivariate model” adjusted for tumor and patient characteristics, he said at the meeting.

The researchers defined limited life expectancy as age 85 years or older or a Charlson Comorbidity Index ≥ 3. Approximately half of the patients with limited life expectancy died within 5 years, none of NMSC. Most patients with limited life expectancy were not bothered by their tumors, and approximately one in five reported a treatment complication within 2 years. The 5-year tumor recurrence rate was 3.7%.

A more recent study looked at 1181 patients older than 85 years with NMSC referred for Mohs surgery. Almost all patients in the multicenter, prospective cohort study (91.3%) were treated with Mohs.

Treated patients were more likely to have facial tumors and higher functional status than those not treated with Mohs surgery, and the most common reasons provided by surgeons for proceeding with the surgery were a patient desire for a high cure rate (66%), higher functional status for age (57%), and high-risk tumor type (40%). Almost 42% of the referred patients were 89 years or older.

“Granted, [the reasons] are justified indications for surgery,” Dr. Patel said. Yet the study raises the question of whether “we need to do Mohs surgery this frequently in elderly patients.” In an email after the meeting, he added, “It’s a question we may need to reconsider as the elderly population continues to increase and the median age of NMSC rises.”

Underutilized Management Options for NMSC

In his practice, discussions of treatment options are preceded by a thorough discussion of the disease itself. Many lesions are low risk, and helping patients understand risks, as well as understanding what is important to the patient — especially those with limited life expectancy — will guide shared decision-making to choose the best treatment, Dr. Patel said at the meeting.

The dermatologist’s risk assessment — both staging and stratifying risk as it relates to specific outcomes such as recurrence, metastases, or death — takes on added importance in the older patient, he emphasized. “I think we underutilize the risk assessment.”

Also underutilized is the option of shave removal for low-risk squamous cell carcinomas and basal cell carcinomas, Dr. Patel said, noting that, in the National Comprehensive Cancer Network guidelines, “there’s an option for shave removal and nothing more if you have clear margins.”

Alternatively, disc excision with the initial biopsy can often be considered. “Having that intent to treat at the time of biopsy may be all that needs to be done” in older patients with obvious or highly suspicious lesions, he said.

Systemic immunotherapy has joined the treatment armamentarium for advanced basal cell carcinoma and advanced cutaneous squamous cell carcinoma, and if early, ongoing research of intralesional programmed cell death protein 1 inhibitor treatment advances, this could be another option for older adults in the future, Dr. Patel said. Targeting drug delivery directly to the tumor would lower the total dose, decrease systemic exposure, and could be used to avoid surgery for some groups of patients, such as those with limited life expectancy.

A Personal Story, a Word on Melanoma

Dr. Prather recalled when her 97-year-old grandfather had a skin lesion on his forehead removed, and a conversation he had with her mother about whether he really needed to have the procedure because he had cognitive impairment and was on oral anticoagulants.

The clinician “said it absolutely had to go. ... I can’t tell you how much his doctors’ visits and wound care consumed my family’s life for the next few years — for this thing that never quite healed,” she said.

“Was it necessary? The more I’ve learned over time is that it wasn’t,” Dr. Prather added. “We have to take time [with our older patients] and think critically. What is feasible? What makes the most sense? What is the most important thing I need to know about the patient?”

Also important, Dr. Patel noted, is the big-picture consideration of skin cancer treatment costs. The MEPS survey data showing the rising prevalence of skin cancer treatment also documented the economic burden: A nearly 30% increase in the average annual cost of treating NMSC from $5 billion in 2012-2015 to $6.5 billion in 2016-2018. (The average annual costs of treating melanoma decreased slightly.) “Skin cancer is a big drain on our limited resources,” he said.

With melanoma as well, dermatologists must think critically and holistically about the individual patient — and not have “a single view lens of the disease and how we treat the disease,” said Dr. Patel, urging the audience to read a “Sounding Board” article published in The New England Journal of Medicine in 2021. The article argued that there is overdiagnosis of cutaneous melanoma stemming from increased screening, falling clinical thresholds for biopsy, and falling pathological thresholds for labeling morphologic changes as cancer.

“There’s a diagnostic disconnect and a problem of overdiagnosis ... because we’re afraid to miss or make a mistake,” he said. “It leads to the question, do all lesions denoted as skin cancers need aggressive treatment? What does it mean for the patient in front of you?”

Dr. Patel reported receiving honoraria from Regeneron, Almirall, Biofrontera, Sun Pharma, and SkylineDx and serving on the speaker bureau of Regeneron and Almirall. He is chief medical officer for Lazarus AI and is cofounder of the Skin Cancer Outcomes consortium. Dr. Prather disclosed relationships with the National Institutes of Health, AHRQ, The Washington Home Foundation, and the Alzheimer’s Association.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

WASHINGTON — In 2013, Vishal A. Patel, MD, was completing a fellowship in Mohs surgery and cutaneous oncology at Columbia University Irving Medical Center, New York City, when a study was published showing that most nonmelanoma skin cancers (NMSCs) were treated with surgery, regardless of the patient’s life expectancy. Life expectancy “should enter into treatment decisions,” the authors concluded.

The article "got a lot of pushback from the Mohs surgeons," and the value of surgery in older adults, particularly those with limited life expectancy, "became a hot topic," Dr. Patel recalled at the ElderDerm conference hosted by the Department of Dermatology at George Washington University, Washington, DC, and described as a first-of-its-kind meeting dedicated to improving dermatologic care for older adults.

Christine Kilgore
Dr. Vishal A. Patel (right), director of the cutaneous oncology program at the GW Cancer Center, and Dr. Christina Prather, director and associate professor of geriatrics and palliative medicine, George Washington University.

Today, however, more than a decade later, guidelines still promote surgical therapy as the gold standard across the board, and the questions raised by the study remain unaddressed, Dr. Patel, associate professor of dermatology and medicine/oncology at George Washington University, said at the meeting. These questions are becoming increasingly urgent as the incidence of skin cancer, especially NMSC, rises in the older adult population, particularly among patients older than 85 years. "It's a function of our training and our treatment guidelines that we reach for the most definitive treatment, which happens to be the most aggressive, in these patients," added Dr. Patel, who is also director of the cutaneous oncology program at the GW Cancer Center.

“Sometimes we lose track of what ... we need to do” to provide care that reflects the best interests of the older patient, he continued. “Surgery may be the gold standard for treating the majority of NMSCs ... but is it the [best option] for what our older patients and patients with limited life expectancy need?”

Learning about what truly matters to the patient is a key element of the “age-friendly, whole-person care” that dermatologists must embrace as older adults become an increasingly large subset of their patient population, Christina Prather, MD, director and associate professor of geriatrics and palliative medicine at George Washington University, said at the meeting.

By 2040, projections are that the number of adults aged 85 years and older in the United States will be nearly quadruple the number in 2000, according to one estimate.

“We know that there are less than 6000 practicing geriatricians in the country ... [so the healthcare system] needs more of you who know how to bring an age-friendly approach to care,” Dr. Prather said. Dermatology is among the specialties that need to be “geriatricized.”
 

NMSC Increasing in the Older Population

The incidence of skin cancer is rising faster than that of any other cancer, Dr. Patel said. One window into the epidemiology, he said, comes from recently published data showing that an average of 6.1 million adults were treated each year for skin cancer during 2016-2018 (5.2 million of them for NMSC) — an increase from an average of 5.8 million annually in the 2012-2015 period. The data come from the Medical Expenditure Panel Survey (MEPS), which is conducted by the US Public Health Service through the Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention.

As a frame of reference, the average number of adults treated each year for nonskin cancers during these periods rose from 10.8 to 11.9 million, according to the 2023 MEPS data. “Skin cancer is about one-third of all cancers combined,” Dr. Patel said.

Not only is the incidence of NMSC significantly higher than that of melanoma but it also shows a more prominent aging trend. This was documented recently in a long-term observational study from Japan, in which researchers examined the change in the median age of patients with NMSC and melanoma, compared with cancers of other organs, from 1991 to 2020 and found that NMSC had by far the greatest rise, to a median age of 80 years in 2021.

Even more notable, Dr. Patel said, was a particularly marked increase in the number of patients with skin cancer aged 90 years and older. In 2021, this group of older adults accounted for 17% of patients receiving treatment for skin cancer at the Japanese hospital where the data were collected.

The 2013 study that stirred Dr. Patel as a fellow enrolled 1536 consecutive patients diagnosed with NMSC at two dermatology clinics (a University of California San Francisco–based private clinic and a Veterans Affairs Medical Center clinic) who were followed for 6 years. "What's interesting and worth thinking about is that, regardless of patients' life expectancy, NMSCs were treated aggressively and surgically, and the choice of surgery was not influenced by the patient's poor prognosis in a multivariate model" adjusted for tumor and patient characteristics, he said at the meeting.

The researchers defined limited life expectancy as age 85 years or older or a Charlson Comorbidity Index ≥ 3. Approximately half of the patients with limited life expectancy died within 5 years, none from NMSC. Most patients with limited life expectancy reported that their tumors rarely bothered them, and approximately one in five reported a treatment complication within 2 years. The 5-year tumor recurrence rate was 3.7%.

A more recent study looked at 1181 patients older than 85 years with NMSC referred for Mohs surgery. Almost all patients in the multicenter, prospective cohort study (91.3%) were treated with Mohs.

Treated patients were more likely to have facial tumors and higher functional status than those not treated with Mohs surgery, and the most common reasons provided by surgeons for proceeding with the surgery were a patient desire for a high cure rate (66%), higher functional status for age (57%), and high-risk tumor type (40%). Almost 42% of the referred patients were 89 years or older.

"Granted, [the reasons] are justified indications for surgery," Dr. Patel said. Yet the study raises the question of whether Mohs surgery needs to be performed this frequently in elderly patients. In an email after the meeting, he added, "It's a question we may need to reconsider as the elderly population continues to increase and the median age of NMSC rises."

Underutilized Management Options for NMSC

In his practice, discussions of treatment options are preceded by a thorough discussion of the disease itself. Many lesions are low risk, and helping patients understand risks, as well as understanding what is important to the patient — especially those with limited life expectancy — will guide shared decision-making to choose the best treatment, Dr. Patel said at the meeting.

The dermatologist’s risk assessment — both staging and stratifying risk as it relates to specific outcomes such as recurrence, metastases, or death — takes on added importance in the older patient, he emphasized. “I think we underutilize the risk assessment.”

Also underutilized is the option of shave removal for low-risk squamous cell carcinomas and basal cell carcinomas, Dr. Patel said, noting that, in the National Comprehensive Cancer Network guidelines, “there’s an option for shave removal and nothing more if you have clear margins.”

Alternatively, disc excision with the initial biopsy can often be considered. “Having that intent to treat at the time of biopsy may be all that needs to be done” in older patients with obvious or highly suspicious lesions, he said.

Systemic immunotherapy has joined the treatment armamentarium for advanced basal cell carcinoma and advanced cutaneous squamous cell carcinoma, and if the early, ongoing research on intralesional programmed cell death protein 1 inhibitor treatment advances, this could become another option for older adults in the future, Dr. Patel said. Targeting drug delivery directly to the tumor would lower the total dose, decrease systemic exposure, and could allow some groups of patients, such as those with limited life expectancy, to avoid surgery.

A Personal Story, a Word on Melanoma

Dr. Prather recalled when her 97-year-old grandfather had a skin lesion removed from his forehead, and a conversation he had with her mother about whether he really needed the procedure, given that he had cognitive impairment and was taking oral anticoagulants.

The clinician “said it absolutely had to go. ... I can’t tell you how much his doctors’ visits and wound care consumed my family’s life for the next few years — for this thing that never quite healed,” she said.

“Was it necessary? The more I’ve learned over time is that it wasn’t,” Dr. Prather added. “We have to take time [with our older patients] and think critically. What is feasible? What makes the most sense? What is the most important thing I need to know about the patient?”

Also important, Dr. Patel noted, is the big-picture consideration of skin cancer treatment costs. The MEPS survey data showing the rising prevalence of skin cancer treatment also documented the economic burden: a nearly 30% increase in the average annual cost of treating NMSC, from $5 billion in 2012-2015 to $6.5 billion in 2016-2018. (The average annual cost of treating melanoma decreased slightly.) "Skin cancer is a big drain on our limited resources," he said.

With melanoma as well, dermatologists must think critically and holistically about the individual patient — and not have “a single view lens of the disease and how we treat the disease,” said Dr. Patel, urging the audience to read a “Sounding Board” article published in The New England Journal of Medicine in 2021. The article argued that there is overdiagnosis of cutaneous melanoma stemming from increased screening, falling clinical thresholds for biopsy, and falling pathological thresholds for labeling morphologic changes as cancer.

“There’s a diagnostic disconnect and a problem of overdiagnosis ... because we’re afraid to miss or make a mistake,” he said. “It leads to the question, do all lesions denoted as skin cancers need aggressive treatment? What does it mean for the patient in front of you?”

Dr. Patel reported receiving honoraria from Regeneron, Almirall, Biofrontera, Sun Pharma, and SkylineDx and serving on the speaker bureau of Regeneron and Almirall. He is chief medical officer for Lazarus AI and is cofounder of the Skin Cancer Outcomes consortium. Dr. Prather disclosed relationships with the National Institutes of Health, AHRQ, The Washington Home Foundation, and the Alzheimer’s Association.

A version of this article appeared on Medscape.com.

Nurse-Led Care for Gout Generates Best Uric Acid Control

Article Type
Changed
Mon, 06/24/2024 - 15:04

— To maintain gout in remission, nurses in a rheumatology service do better than doctors in implementing a straightforward treat-to-target (T2T) strategy, according to a randomized study that showed a consistent advantage across subgroups.

“Our study provides evidence that nurse-led therapy for gout leads to better uric acid control, which is an important consideration with the increasing incidence and the increasing costs of managing this condition,” said Jesper W. Larsen, a registered nurse affiliated with the Department of Rheumatology at North Denmark Regional Hospital, Hjørring, Denmark. He presented the study at the annual European Congress of Rheumatology.

The advantage of nurse-led care was seen across every subgroup evaluated. Moreover, more patients in the nurse-led group than in the usual care group remained on urate-lowering therapy at the end of the 2-year study.

The optimal management of gout is based on the treatment goal of lowering serum uric acid (sUA) to below the physiologic level of 0.36 mmol/L (6 mg/dL), a strategy called T2T that is endorsed by both EULAR and the American College of Rheumatology.
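The mg/dL and mmol/L figures above are the same threshold expressed in different units; as a quick sanity check, the conversion can be sketched in a few lines of Python (the 168.11 g/mol molar mass of uric acid and the helper name are illustrative details, not from the presentation):

```python
# Sanity-check the serum uric acid (sUA) treat-to-target threshold:
# 6 mg/dL should correspond to roughly 0.36 mmol/L.

URIC_ACID_MOLAR_MASS_G_PER_MOL = 168.11  # molecular weight of uric acid

def sua_mg_dl_to_mmol_l(mg_dl: float) -> float:
    """Convert sUA from mg/dL to mmol/L.

    mg/dL * 10 gives mg/L; dividing by g/mol then yields mmol/L.
    """
    return mg_dl * 10 / URIC_ACID_MOLAR_MASS_G_PER_MOL

if __name__ == "__main__":
    target_mmol_l = sua_mg_dl_to_mmol_l(6.0)
    print(f"6 mg/dL = {target_mmol_l:.2f} mmol/L")  # 0.36 mmol/L
```

Running the conversion confirms that the two guideline targets quoted above are equivalent.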

“This target can be reached in most patients with commonly used therapies, including allopurinol, which is relatively inexpensive,” Mr. Larsen said. Given that disease control and sustained remission are largely based on this target, he and his colleagues tested the hypothesis that nurses working in a rheumatology service could provide efficient and cost-effective care.

A total of 286 patients with microscopy-confirmed gout who were treated between 2015 and 2021 were enrolled in the study. Of these, 100 patients who had been enrolled before the introduction of nurse-led care received and were maintained on usual care, which generally included diagnosis by an orthopedist, an emergency room physician, or an internist, with subsequent treatment and follow-up by a general practitioner.

Of 186 patients treated after nurse-led care was implemented, 72 were transitioned to usual care, and the remaining 114 continued receiving nurse-led care over the next 2 years of follow-up. In the nurse-led care arm, nurses who specialized in rheumatology and were trained in gout management monitored a structured T2T strategy. They were available for consultation, provided patient education, and followed laboratory values, including sUA, which they used to adjust treatments.

Except in the case of complications, “there was no more contact with physicians” once care was transferred to the nurse, Mr. Larsen said. Most of the nurse management was based on sUA laboratory values and performed by telephone.

At 2 years, 112 patients in the nurse-led care group were compared with the 144 in the usual care group. Two of the 114 patients who entered the nurse-care cohort and 28 of the 172 in the usual care cohort died before the study ended.

At 2 years, the proportion of patients maintained at the target sUA was almost twice as great in the nurse-led arm (83% vs 44%). This was also true of patients aged 70 years or older (84% vs 45%), patients with tophi (60% vs 33%), and patients with sUA > 0.5 mmol/L at baseline (84% vs 44%). Nurse-led care also kept a greater proportion of patients at target who entered the study with an estimated glomerular filtration rate < 60 mL/min per 1.73 m2 (84% vs 52%) or were taking diuretics (89% vs 52%). All differences reached statistical significance (P < .05).

The reason for the lower mortality at 2 years in the nurse-led group (4% vs 23%; P < .001) is unclear, according to Mr. Larsen. In addition to considering a selection bias that might have channeled patients with more severe disease to usual care, he and his coinvestigators are also considering whether the lower rates of sUA control in the usual care group might have led to a higher rate of cardiovascular events.

Because of some baseline imbalances, a selection bias cannot be ruled out, but the imbalances did not uniformly favor nurse-led care. For example, the proportion of patients with diabetes (23% vs 13%) or a baseline cancer diagnosis (11% vs 5%) was higher in the nurse-led care group. The proportion of patients with atrial fibrillation (45% vs 35%) or on diuretics (47% vs 33%) at baseline was higher in the usual care group.

The median age was 69 years in both groups, although the nurse-led group included a higher proportion of men (86% vs 76%).

Within a T2T strategy, nurses focused on reaching the target might do a better job than physicians in consistently monitoring and adjusting therapies as needed, but Mr. Larsen also speculated that nurses might offer a more collaborative approach and provide greater support through patient education and regular telephone contact.

Potential Advantages of Nurse-Led Care

Clinicians concerned about nurses missing nuances in disease progression or being slow to recognize complications might be surprised to learn about the advantage of nurse-led care, but Mwidimi Ndosi, PhD, an associate professor in rheumatology nursing at the University of the West of England, Bristol, England, was not.

"There is quite a large literature to show that nursing care is often superior to physician-led patient management in the appropriate circumstances," Dr. Ndosi said. In this specific instance of gout management, he said that the treatment target is clear, and nurses are often able to devote more time to a specific goal, like T2T, than clinicians balancing more priorities.

Dr. Mwidimi Ndosi


“In this trial, the care was administered by nurse specialists who presumably are skilled in this disease and know their limitations if a consultation with a physician is needed,” he said.

Dr. Ndosi, like Mr. Larsen, considers it likely that nurse-led programs for a T2T gout protocol will be implemented elsewhere. Dr. Ndosi pointed out that patients who are concerned about the quality of nurse-led care are generally convinced of its merits over time.

Because nurses are often able to spend more clinical time with patients and are more willing than physicians to engage in resolving obstacles to self-care, “there are many studies to show that patients are often more satisfied with care provided by nurses,” he said.

Mr. Larsen and Dr. Ndosi reported no potential conflicts of interest.

A version of this article first appeared on Medscape.com.

FROM EULAR 2024

Low Hydroxychloroquine Levels in Early Pregnancy Tied to Greater Flares in SLE

TOPLINE:

Low hydroxychloroquine blood levels during the first trimester of pregnancy in women with systemic lupus erythematosus (SLE) are linked to severe maternal flares but not to adverse pregnancy outcomes.

METHODOLOGY:

  • Researchers included pregnant women with SLE (median age, 32.1 years; median duration of disease, 8.3 years) who were enrolled in an ongoing French prospective observational study and were receiving hydroxychloroquine.
  • The study assessed hydroxychloroquine blood levels during the first trimester. It defined severe nonadherence as having levels < 200 ng/mL and classified levels < 500 ng/mL as subtherapeutic.
  • Primary outcomes were maternal flares during pregnancy and adverse pregnancy outcomes, including fetal/neonatal death and preterm delivery.

TAKEAWAY:

  • Overall, 32 women experienced at least one flare during the second and third trimesters; four had severe flares.
  • Severe maternal SLE flares were significantly more frequent in women whose first-trimester hydroxychloroquine levels were subtherapeutic (8.8% vs 0.7% for levels above the subtherapeutic threshold; P = .02) or indicated severe nonadherence (13.3% vs 1.3% for levels above the severe-nonadherence threshold; P = .04).
  • There was no significant difference in adverse pregnancy outcomes by hydroxychloroquine level, suggesting a specific effect on maternal rather than fetal health.

IN PRACTICE:

According to the authors, “this study supports hydroxychloroquine blood level assessment in pregnant women with SLE, as a predictor of severe maternal disease activity in pregnancy.”

SOURCE:

The study was led by Gelsomina Alle, MD, Assistance Publique-Hôpitaux de Paris, Paris, France. It was published online in Rheumatology.

LIMITATIONS:

The study’s sample size limited the ability to perform multivariate analyses for severe flares. Patients had to have an ongoing pregnancy at 12 weeks to be included, potentially excluding those with early pregnancy loss. The study only observed first-trimester hydroxychloroquine levels, not accounting for adherence variations throughout pregnancy.

DISCLOSURES:

The study funding source was not disclosed. Several authors declared financial relationships with pharmaceutical companies, including research support and consulting fees.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article first appeared on Medscape.com.

Sex-Related Differences Found in IgG4-Related Disease Epidemiology

TOPLINE:

Men with immunoglobulin G4 (IgG4)-related disease exhibit significantly lower serum lipase levels and a greater likelihood of organ involvement than women, highlighting significant sex-dependent differences in disease manifestations.

METHODOLOGY:

  • Researchers conducted a retrospective study of 328 patients (69% men) diagnosed with IgG4-related disease at the Massachusetts General Hospital – Rheumatology Clinic, Boston, who met the American College of Rheumatology–European Alliance of Associations for Rheumatology (ACR-EULAR) classification criteria between January 2008 and May 2023.
  • The male-to-female ratio of 2.2:1.0 was statistically significant, and men were typically older at diagnosis (median age, 63.7 vs 58.2 years).
  • Data on serum lipase levels, renal involvement, and other clinical and laboratory parameters were collected.

TAKEAWAY:

  • Men had higher baseline ACR-EULAR scores, indicating more severe disease (median score of 35.0 vs 29.5; P = .0010).
  • Male patients demonstrated a median baseline serum lipase concentration of 24.5 U/L, significantly lower than the 33.5 U/L observed in women.
  • Pancreatic (50% vs 26%) or renal (36% vs 18%) involvement was more common in men.
  • Men exhibited higher IgG4 levels (P = .0050) and active B-cell responses in the blood (P = .0095).

IN PRACTICE:

According to the authors, this work confirms “the impression of an important sex disparity among patients with IgG4-related disease, with most patients being male, and male patients demonstrating strong tendencies toward more severe disease than female patients.”

SOURCE:

The study was led by Isha Jha, MD, Massachusetts General Hospital, Boston. It was published online on May 30, 2024, in The Lancet Rheumatology.

LIMITATIONS:

The study’s retrospective design may limit the ability to establish causality between sex differences and IgG4-related disease manifestations. A relatively small percentage of patients were assessed before receiving any immunosuppressive treatment, potentially influencing the observed clinical parameters.

DISCLOSURES:

This work was supported by the National Institutes of Health/National Institute of Allergy and Infectious Diseases, the Rheumatology Research Foundation, and the National Institute of Arthritis and Musculoskeletal and Skin Diseases. Some authors declared financial ties outside this work.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article first appeared on Medscape.com.

Why Do Investigational OA Drugs Need Better Trial Endpoints? Lorecivivint Serves as an Example

The hypothesis that pivotal clinical trials for osteoarthritis (OA)-modifying therapies are not using appropriate designs or endpoints appears to be consistent with the recent failure of the phase 3 trial of the investigational agent lorecivivint, according to experts tackling this issue.

For the elusive target of disease-modifying OA drugs (DMOADs), “there have been a lot of developments in the last few years but so far a lot of disappointments,” said Francis Berenbaum, MD, PhD, head of the department of rheumatology, Saint-Antoine Hospital, Paris, France.

Disagreement on the target most likely to favorably alter the natural history of disease might be the key issue. Dr. Berenbaum considers it essential to determine which changes in the joint signify a favorable drug effect and will lead to what regulatory agencies consider a clinically meaningful benefit. These include improved function and long-term preservation of the joint, as well as symptom control.

Cartilage is not among the primary targets for modifying the course of OA, according to Dr. Berenbaum, who spoke in a session on DMOADs and regenerative OA therapies at the annual European Congress of Rheumatology.

OA Is Not a Cartilage-Only Disease

“There is now a big consensus that osteoarthritis is not a cartilage-only disease,” he said. Rather, he addressed the inadequate appreciation of the “whole joint” pathology that underlies OA. He called for a fundamental “paradigm change” to work toward a disease-modifying effect that produces benefit on a hard endpoint.

There are multiple steps needed to work toward this goal after a consensus is reached on a meaningful surrogate endpoint, Dr. Berenbaum said. While symptom reduction is a good start, he called for evidence of disease attenuation or a regenerative effect on an important surrogate such as improved integrity of synovial tissue and improved bone health. Such surrogates are necessary to guide DMOAD development but not sufficient. The proof that a therapy is a DMOAD depends on a favorable effect on a hard endpoint. In the case of the knee, freedom from joint replacement is an example, Dr. Berenbaum said.

Philip G. Conaghan, MBBS, PhD, director of rheumatic and musculoskeletal medicine, University of Leeds, England, agreed with this general premise. Speaking just before Dr. Berenbaum in the same session, Dr. Conaghan traced the history of the effort to create DMOADs and provided updates on agents now in clinical trials.

Dr. Philip G. Conaghan


In his talk, he listed some of the many disappointments, including agents that targeted cartilage thickness, before reviewing the numerous ongoing development programs. Many targets appear viable, but none are in the final stages of testing.

In remarks to this news organization, he said he generally agreed with Dr. Berenbaum about the need for greater rigor for developing drugs to meet regulatory criteria for disease-modifying effects.

Of the drugs he reviewed, Dr. Conaghan identified lorecivivint, an intra-articular CLK/DYRK inhibitor that’s thought to modulate Wnt and inflammatory pathways, as the only drug with DMOAD potential to go to a multicenter phase 3 trial so far. The drug’s negative outcome in phase 3 was particularly disappointing after the substantial promise shown in a phase 2b study published in 2021.

In the phase 3 study, lorecivivint did not achieve a significant improvement relative to placebo in the primary endpoint of medial joint space width (JSW) in the target knee, as assessed at the end of the 48-week, double-blind trial.

New Follow-Up Data Support DMOAD Activity

Yet, additional extension data from the phase 3 lorecivivint trial presented in the EULAR DMOAD session challenge the conclusion even if they do not change the results.

The new data presented at EULAR are the second of two sets of extension data. The first, reported earlier, involved an analysis at 96 weeks, or 48 weeks after the end of the double-blind trial. At the beginning of this extension, lorecivivint-start patients received a second intra-articular injection of 0.07 mg, while placebo patients were crossed over to receive their first injection.

Over the course of this first extension, the gradual loss in medial JSW observed from baseline to the end of the initial 48 weeks had plateaued in those treated with lorecivivint, but the decline continued in the placebo group. As a result, the lorecivivint-start patients had a numerical but not a statistically significant advantage for medial JSW over the placebo-switch group, according to Yusuf Yazici, MD, chief medical officer of Biosplice Therapeutics, San Diego, which developed lorecivivint.

Ted Bosworth/Medscape Medical News
Dr. Yusuf Yazici


In a second open-label extension described by Dr. Yazici at EULAR 2024, a third injection was administered to the lorecivivint-start patients and a second injection to the placebo-start patients. After 52 more weeks of follow-up, there were now 3 years of follow-up in the lorecivivint-start group and 2 years of follow-up in the placebo-start group.

At the end of this second extension, lorecivivint-start patients had a median increase in JSW that was approaching the baseline level at study entry. Although the placebo-start group had experienced a decline in the medial JSW at the end of the first extension when they had received one injection, JSW had also improved in the direction of baseline after a second injection with 2 years of follow-up. The advantage of three injections of lorecivivint over 3 years had now reached statistical significance (P = .031) despite the improvement seen in the placebo-start group following two injections over 2 years.
 

At 3 Years, Benefit Finally Reaches Significance

If placebo-treated patients had not received a second shot of lorecivivint and progressed at the rate seen before their second shot, the hypothetical trajectory would have provided lorecivivint with a highly statistically significant advantage (P < .001), said Dr. Yazici, displaying a hypothetical graph.

Along with improvements in pain and function associated with lorecivivint relative to placebo at 6, 12, and 24 months, the structural improvements at 3 years now suggest that “long-term treatment with lorecivivint has the potential to be a DMOAD for knee OA,” Dr. Yazici said.

While Dr. Berenbaum did not comment on this speculation, he did note the potential need for long-term studies to prove a disease-modifying effect in OA. This is the rationale for identifying surrogates.

To illustrate this point, Dr. Berenbaum made an analogy between OA and cardiovascular disease. In cardiovascular disease, surrogates of disease-modifying therapies, such as control of hypertension or hyperlipidemia, are accepted by regulatory agencies on the basis of their proven association with hard endpoints, such as myocardial infarction, stroke, or cardiovascular death. Like joint failure, these events can take years or decades to arise.

“For trials in OA, we need to agree on these surrogates,” Dr. Berenbaum said, although he acknowledged that they would then have to be validated. Noting that the US Food and Drug Administration has now identified OA as a serious disease for which accelerated drug approvals will be considered to address an unmet need, Dr. Berenbaum suggested there is an even greater impetus for improving strategies for DMOAD development.

Dr. Berenbaum reported financial relationships with Grünenthal, GlaxoSmithKline, Eli Lilly, Novartis, Pfizer, and Servier. Dr. Conaghan reported financial relationships with AbbVie, AstraZeneca, Eli Lilly, Galapagos, GlaxoSmithKline, Grünenthal, Janssen, Levicept, Merck, Novartis, Pfizer, Regeneron, Stryker, and UCB. Dr. Yazici is an employee of Biosplice Therapeutics, which provided funding for the OAS-07 trial.
 

A version of this article first appeared on Medscape.com.


Article Source

FROM EULAR 2024


Akira Endo, the Father of Statins, Dies

Article Type
Changed
Mon, 06/24/2024 - 13:53

Akira Endo, PhD, the Japanese microbiologist and biochemist known as the father of statins, died at the age of 90 on June 5. His research led to the discovery and rise of a class of drugs that revolutionized the prevention and treatment of cardiovascular diseases. This scientific journey began over half a century ago.

Inspired by Alexander Fleming

Born into a family of farmers in northern Japan, Dr. Endo was fascinated by natural sciences from a young age and showed a particular interest in fungi and molds. At the age of 10, he already knew he wanted to become a scientist.

He studied in Japan and the United States, conducting research at the Albert Einstein College of Medicine in New York City. He was struck by the high number of elderly and overweight individuals in the United States and realized the importance of developing a drug to combat cholesterol. It was upon his return to Japan, when he joined the Sankyo laboratory, that the development of statins began.

Inspired by Alexander Fleming, who discovered penicillin in the mold Penicillium, he hypothesized that fungi could produce antibiotics inhibiting 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the enzyme that produces cholesterol precursors.

After a year of research on nearly 3800 strains, his team found a known substance, citrinin, that strongly inhibited HMG-CoA reductase and lowered serum cholesterol levels in rats. The research was halted because of its toxicity to the rodents’ kidneys. “Nevertheless, the experience with citrinin gave us hope and courage to quickly discover much more effective active substances,” said Dr. Endo in an article dedicated to the discovery of statins.
 

First Statin Discovered

In the summer of 1972, researchers discovered a second active culture broth, Penicillium citrinum Pen-51, which was isolated from a sample of rice collected in a grain store in Kyoto.

In July 1973, they isolated three active metabolites from this mold, one of which was compactin, which had structural similarities to HMG-CoA, the substrate of the HMG-CoA reductase reaction.

In 1976, they published two articles reporting the discovery and characterization of compactin (mevastatin), the first statin.
 

Several Setbacks

Unfortunately, when Sankyo biologists assessed the effectiveness of compactin by giving rats a compactin-supplemented diet for 7 days, no reduction in serum cholesterol was observed.

Only later did an unpublished study show that the statin significantly decreased plasma cholesterol after a month of treatment in laying hens. The hypocholesterolemic effects of compactin were then demonstrated in dogs and monkeys.

However, researchers faced a second challenge in April 1977. Microcrystalline structures were detected in the liver cells of rats that had been fed extremely high amounts of compactin (over 500 mg/kg per day for 5 weeks). Initially deemed toxic, the structures were ultimately found to be nontoxic.

A phase 2 trial began in the summer of 1979 with very encouraging preliminary results, but in August 1980, clinical development of compactin was halted, as the drug was suspected of causing lymphomas in dogs given very high doses: 100 or 200 mg/kg per day for 2 years.

This suspicion also led to the termination of trials on another statin, the closely related lovastatin, which was discovered simultaneously from different fungi by the Merck laboratory and Dr. Endo in February 1979.

First Statin Marketed

Subsequently, dramatic reductions in cholesterol levels observed in patients prompted Merck to conduct large-scale clinical trials of lovastatin in high-risk patients and long-term toxicity studies in dogs in 1984.

It was confirmed that the drug significantly reduced cholesterol levels and was well tolerated. No tumors were detected.

Lovastatin received approval from the Food and Drug Administration to become the first marketed statin in September 1987.

Dr. Endo received numerous awards for his work, including the Albert Lasker Award for Clinical Medical Research in 2008 and the Outstanding Achievement Award from the International Atherosclerosis Society in 2009.

This story was translated from the Medscape French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Akira Endo, PhD, the Japanese microbiologist and biochemist known as the father of statins, died at the age of 90 on June 5. His research led to the discovery and rise of a class of drugs that revolutionized the prevention and treatment of cardiovascular diseases. This scientific journey began over half a century ago.

Inspired by Alexander Fleming

Born into a family of farmers in northern Japan, Dr. Endo was fascinated by natural sciences from a young age and showed a particular interest in fungi and molds. At the age of 10, he already knew he wanted to become a scientist.

He studied in Japan and the United States, conducting research at the Albert Einstein College of Medicine in New York City. He was struck by the high number of elderly and overweight individuals in the United States and realized the importance of developing a drug to combat cholesterol. It was upon his return to Japan, when he joined the Sankyo laboratory, that the development of statins began.

Inspired by Alexander Fleming, who discovered penicillin in the mold Penicillium, he hypothesized that fungi could produce antibiotics inhibiting 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the enzyme that produces cholesterol precursors.

After a year of research on nearly 3800 strains, his team found a known substance, citrinin, that strongly inhibited HMG-CoA reductase and lowered serum cholesterol levels in rats. The research was halted because of its toxicity to the rodents’ kidneys. “Nevertheless, the experience with citrinin gave us hope and courage to quickly discover much more effective active substances,” said Dr. Endo in an article dedicated to the discovery of statins.
 

First Statin Discovered

In the summer of 1972, researchers discovered a second active culture broth, Penicillium citrinum Pen-51, which was isolated from a sample of rice collected in a grain store in Kyoto.

In July 1973, they isolated three active metabolites from this mold, one of which was compactin, which had structural similarities to HMG-CoA, the substrate of the HMG-CoA reductase reaction.

In 1976, they published two articles reporting the discovery and characterization of compactin (mevastatin), the first statin.
 

Several Setbacks

Unfortunately, when Sankyo biologists assessed the effectiveness of compactin by giving rats a diet supplemented with compactin for 7 days, no reduction in serum cholesterol was observed.

Only later did an unpublished study show that the statin significantly decreased plasma cholesterol after a month of treatment in laying hens. The hypocholesterolemic effects of compactin were then demonstrated in dogs and monkeys.

However, researchers faced a second challenge in April 1977. Microcrystalline structures were detected in the liver cells of rats that had been fed extremely high amounts of compactin (over 500 mg/kg per day for 5 weeks). Initially deemed toxic, the structures were ultimately found to be nontoxic.

A phase 2 trial began in the summer of 1979 with very encouraging preliminary results, but in August 1980, clinical development of compactin was halted, as the drug was suspected of causing lymphomas in dogs given very high doses: 100 or 200 mg/kg per day for 2 years.

This suspicion also led to the termination of trials on another statin, the closely related lovastatin, which was discovered simultaneously from different fungi by the Merck laboratory and Dr. Endo in February 1979.
 

 

 

First Statin Marketed

Subsequently, dramatic reductions in cholesterol levels observed in patients prompted Merck to conduct large-scale clinical trials of lovastatin in high-risk patients and long-term toxicity studies in dogs in 1984.

It was confirmed that the drug significantly reduced cholesterol levels and was well tolerated. No tumors were detected.

Lovastatin received approval from the Food and Drug Administration to become the first marketed statin in September 1987.

Dr. Endo received numerous awards for his work, including the Albert Lasker Award for Clinical Medical Research in 2008 and the Outstanding Achievement Award from the International Atherosclerosis Society in 2009.

This story was translated from the Medscape French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article appeared on Medscape.com.

Akira Endo, PhD, the Japanese microbiologist and biochemist known as the father of statins, died at the age of 90 on June 5. His research led to the discovery and rise of a class of drugs that revolutionized the prevention and treatment of cardiovascular diseases. This scientific journey began over half a century ago.

Inspired by Alexander Fleming

Born into a family of farmers in northern Japan, Dr. Endo was fascinated by natural sciences from a young age and showed a particular interest in fungi and molds. At the age of 10, he already knew he wanted to become a scientist.

He studied in Japan and the United States, conducting research at the Albert Einstein College of Medicine in New York City. He was struck by the high number of elderly and overweight individuals in the United States and realized the importance of developing a drug to combat cholesterol. It was upon his return to Japan, when he joined the Sankyo laboratory, that the development of statins began.

Inspired by Alexander Fleming, who discovered penicillin in the mold Penicillium, he hypothesized that fungi could produce antibiotics inhibiting 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the enzyme that produces cholesterol precursors.

After a year of research on nearly 3800 strains, his team found a known substance, citrinin, that strongly inhibited HMG-CoA reductase and lowered serum cholesterol levels in rats. The research was halted because of citrinin’s toxicity to the rodents’ kidneys. “Nevertheless, the experience with citrinin gave us hope and courage to quickly discover much more effective active substances,” said Dr. Endo in an article dedicated to the discovery of statins.
 

First Statin Discovered

In the summer of 1972, researchers discovered a second active culture broth, Penicillium citrinum Pen-51, which was isolated from a sample of rice collected in a grain store in Kyoto.

In July 1973, they isolated three active metabolites from this mold, one of which was compactin, which had structural similarities to HMG-CoA, the substrate of the HMG-CoA reductase reaction.

In 1976, they published two articles reporting the discovery and characterization of compactin (mevastatin), the first statin.
 

Several Setbacks

Unfortunately, when Sankyo biologists assessed the effectiveness of compactin by giving rats a diet supplemented with compactin for 7 days, no reduction in serum cholesterol was observed.

Only later did an unpublished study show that the statin significantly decreased plasma cholesterol after a month of treatment in laying hens. The hypocholesterolemic effects of compactin were then demonstrated in dogs and monkeys.

However, researchers faced a second challenge in April 1977. Microcrystalline structures were detected in the liver cells of rats that had been fed extremely high amounts of compactin (over 500 mg/kg per day for 5 weeks). Initially deemed toxic, the structures were ultimately found to be nontoxic.

A phase 2 trial began in the summer of 1979 with very encouraging preliminary results, but in August 1980, clinical development of compactin was halted, as the drug was suspected of causing lymphomas in dogs given very high doses: 100 or 200 mg/kg per day for 2 years.

This suspicion also led to the termination of trials on another statin, the closely related lovastatin, which was discovered simultaneously from different fungi by the Merck laboratory and Dr. Endo in February 1979.

First Statin Marketed

Subsequently, dramatic reductions in cholesterol levels observed in patients prompted Merck to conduct large-scale clinical trials of lovastatin in high-risk patients and long-term toxicity studies in dogs in 1984.

It was confirmed that the drug significantly reduced cholesterol levels and was well tolerated. No tumors were detected.

Lovastatin received approval from the Food and Drug Administration to become the first marketed statin in September 1987.

Dr. Endo received numerous awards for his work, including the Albert Lasker Award for Clinical Medical Research in 2008 and the Outstanding Achievement Award from the International Atherosclerosis Society in 2009.

This story was translated from the Medscape French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article appeared on Medscape.com.


Ghrelin Paradox: Unlocking New Avenues in Obesity Management

Article Type
Changed
Mon, 06/24/2024 - 13:28

Despite their best efforts, 80% of people who lose weight regain it, and many end up heavier within 5 years. Why? Our bodies fight back, revving up hunger while slowing metabolism after weight loss. In ongoing obesity discussions, ghrelin is in the spotlight as the “hunger hormone” playing a crucial role in driving appetite and facilitating weight gain. 

Weight loss interventions, such as diet or gastric bypass surgery, may trigger an increase in ghrelin levels, potentially fueling long-term weight gain. Consequently, ghrelin remains a focal point of research into innovative antiobesity treatments. 

Ghrelin, often called the “hunger hormone,” is a circulating orexigenic gut hormone, produced in the stomach, with growth hormone–releasing activity. In the intricate balance of energy, central and peripheral peptides such as ghrelin, leptin, adiponectin, and insulin play crucial roles: they regulate hunger, fullness, and metabolic rate, shaping body weight outcomes. 

Since the discovery of ghrelin, in 1999, research in mice and people has focused on its effect on regulating appetite and implications for long-term weight control. When hunger strikes, ghrelin levels surge, sending signals to the brain that ramp up the appetite. Following a meal, ghrelin decreases, indicating fullness. 

Studies have found that people who were injected with subcutaneous ghrelin experienced a 46% increase in hunger and ate 28% more at their next meal than those who didn’t receive a ghrelin injection.

We might expect high levels of ghrelin in individuals with obesity, but this is not the case. In fact, ghrelin levels are typically lower in individuals with obesity than in leaner individuals. This finding might seem to contradict the idea that obesity is driven by high levels of the hunger hormone.

Excess weight could increase sensitivity to ghrelin: more receptors lead to greater hunger stimulation with less ghrelin. Beyond hunger, ghrelin can also lead us to eat for comfort, as when stressed or anxious. Ghrelin and synthetic ghrelin mimetics increase body weight and fat mass by activating receptors in the arcuate nucleus of the hypothalamus (Müller et al.; Bany Bakar et al.). Ghrelin also activates the brain’s reward pathways, making us crave food even when we are not hungry. This connection between ghrelin and emotional eating can contribute to stress-induced obesity. 

In my clinical practice, I have seen individuals gain the most weight when they are under more stress and are sleep-deprived, because ghrelin levels are elevated in these scenarios. This elevation of ghrelin in high-stress, low-sleep situations contributes to weight gain in women during the postpartum period and menopause.

Evidence also suggests that certain foods affect ghrelin levels. After a person eats carbohydrates, their ghrelin levels initially decrease quickly, but this is followed by a rise in ghrelin, leading them to become hungry again. In contrast, protein intake helps suppress ghrelin levels for longer. Hence, we advise patients to increase protein intake while reducing their carb intake, or to always eat protein along with carbs.

It makes sense that when individuals with obesity lose weight by fasting or caloric restriction and try to maintain that weight loss, their bodies tend to produce more ghrelin. This effect might explain why people who lose weight often find it hard to keep it off: Rising ghrelin levels after weight loss might drive them to eat more and regain weight. 

Two prominent weight loss surgeries, sleeve gastrectomy (SG) and Roux-en-Y gastric bypass (RYGB), have opposite effects on ghrelin levels, reflecting their distinct mechanisms for weight loss. SG involves removal of the gastric fundus, where ghrelin is produced, resulting in a significant decrease in ghrelin levels; RYGB operates through malabsorption without directly affecting ghrelin production. Despite these differing approaches, both techniques demonstrate remarkable weight loss efficacy. Research comparing the two procedures reveals that SG leads to decreased fasting plasma ghrelin levels, whereas RYGB prompts an increase, highlighting the additional appetite-reducing mechanism of SG through ghrelin suppression. This contrast underscores the intricate role of ghrelin in appetite regulation and suggests that its manipulation can significantly influence weight loss outcomes.

With the effect of ghrelin in stimulating appetite being established, other studies have explored the relationship between ghrelin and insulin resistance. A meta-analysis by researchers at Qingdao University, Qingdao, China, found that circulating ghrelin levels were negatively correlated with insulin resistance in individuals with obesity and normal fasting glucose levels. The findings suggest that the role of ghrelin in obesity might extend beyond appetite regulation to influence metabolic pathways and that ghrelin may be a marker for predicting obesity.

Researchers are exploring potential therapeutic targets focusing on ghrelin modulation. Although selective neutralization of ghrelin has not yielded consistent results in rodent models, the interplay between ghrelin and LEAP2 — a hormone that attaches to the same brain receptors — could be an area of interest for future obesity treatments.

Could ghrelin be the key to tackling obesity? Blocking ghrelin pharmacologically might be a strategy to keep weight off after weight loss, and it could help prevent the typical rebound effect seen with diets and withdrawal of medications. Considering the high rates of weight regain after diet-induced weight loss and withdrawal of weight loss medications, targeting ghrelin might be the missing link in long-term obesity treatment. It could be a valuable approach to improving long-term outcomes for obesity. However, these blockers might have significant side effects, given that ghrelin affects not only hunger but also the brain’s reward and pleasure centers. Therefore, caution will be needed in developing such medications owing to their potential impact on mood and mental health.

With ghrelin playing roles in hunger, reward pathways, and energy regulation, understanding this hormone is crucial in the fight against obesity. Stay tuned for future research that could shed light on the underlying mechanisms at play and, hopefully, lead to clinical action steps.

Dimpi Desai, MD, is a professor in the Department of Medicine, Division of Endocrinology, Gerontology, and Metabolism, Stanford University, Stanford, California, and has disclosed no relevant financial relationships. Ashni Dharia, MD, is a resident in the Department of Internal Medicine, Allegheny General Hospital, Pittsburgh, Pennsylvania.

A version of this article appeared on Medscape.com.


New Clues on How Blast Exposure May Lead to Alzheimer’s Disease

Article Type
Changed
Mon, 06/24/2024 - 13:22

In October 2023, Robert Card — a grenade instructor in the Army Reserve — shot and killed 18 people in Maine, before turning the gun on himself. As reported by The New York Times, his family said that he had become increasingly erratic and violent during the months before the rampage.

A postmortem conducted by the Chronic Traumatic Encephalopathy (CTE) Center at Boston University found “significant evidence of traumatic brain injuries” [TBIs] and “significant degeneration, axonal and myelin loss, inflammation, and small blood vessel injury” in the white matter, the center’s director, Ann McKee, MD, said in a press release. “These findings align with our previous studies on the effects of blast injury in humans and experimental models.”

Members of the military, such as Mr. Card, are exposed to blasts from repeated firing of heavy weapons not only during combat but also during training.

New data suggest that repeated blast exposure may impair the brain’s waste clearance system, leading to biomarker changes indicative of preclinical Alzheimer’s disease 20 years earlier than typical. A higher index of suspicion for dementia or Alzheimer’s disease may be warranted in patients with a history of blast exposure or subconcussive brain injury who present with cognitive issues, according to experts interviewed.

In 2022, the US Department of Defense (DOD) launched its Warfighter Brain Health Initiative with the aim of “optimizing service member brain health and countering traumatic brain injuries.”

In April 2024, the Blast Overpressure Safety Act was introduced in the Senate to require the DOD to enact better blast screening, tracking, prevention, and treatment. The DOD initiated 26 blast overpressure studies.

Heather Snyder, PhD, Alzheimer’s Association vice president of Medical and Scientific Relations, said that an important component of that research involves “the need to study the difference between TBI-caused dementia and dementia caused independently” and “the need to study biomarkers to better understand the long-term consequences of TBI.”
 

What Is the Underlying Biology?

Dr. Snyder was the lead author of a white paper produced by the Alzheimer’s Association in 2018 on military-related risk factors for Alzheimer’s disease and related dementias. “There is a lot of work trying to understand the effect of pure blast waves on the brain, as opposed to the actual impact of the injury,” she said.

The white paper speculated that blast exposure may be analogous to subconcussive brain injury in athletes where there are no obvious immediate clinical symptoms or neurological dysfunction but which can cause cumulative injury and functional impairment over time.

“We are also trying to understand the underlying biology around brain changes, such as accumulation of tau and amyloid and other specific markers related to brain changes in Alzheimer’s disease,” said Dr. Snyder, chair of the Peer Reviewed Alzheimer’s Research Program Programmatic Panel for Alzheimer’s Disease/Alzheimer’s Disease and Related Dementias and TBI.
 

Common Biomarker Signatures

A recent study in Neurology comparing 51 veterans with mild TBI (mTBI) with 85 veterans and civilians with no lifetime history of TBI is among the first to explore these biomarker changes in human beings.

“Our findings suggest that chronic neuropathologic processes associated with blast mTBI share properties in common with pathogenic processes that are precursors to Alzheimer’s disease onset,” said coauthor Elaine R. Peskind, MD, professor of psychiatry and behavioral sciences, University of Washington, Seattle.

The largely male participants were a mean age of 34 years and underwent standardized clinical and neuropsychological testing as well as lumbar puncture to collect cerebrospinal fluid (CSF). The mTBI group had experienced at least one war zone blast or combined blast/impact that met criteria for mTBI, but 91% had more than one blast mTBI, and the study took place over 13 years.

The researchers found that the mTBI group “had biomarker signatures in common with the earliest stages of Alzheimer’s disease,” said Dr. Peskind.

For example, at age 50, they had lower mean levels of CSF amyloid beta 42 (Abeta42), the earliest marker of brain parenchymal Abeta deposition, compared with the control group (154 pg/mL and 1864 pg/mL lower, respectively).

High CSF phosphorylated tau181 (p-tau181) and total tau are established biomarkers for Alzheimer’s disease. However, levels of these biomarkers remained “relatively constant with age” in participants with mTBI but were higher in older ages for the non-TBI group.

The mTBI group also showed worse cognitive performance at older ages (P < .08). Poorer verbal memory and verbal fluency performance were associated with lower CSF Abeta42 in older participants (P ≤ .05).

In Alzheimer’s disease, a reduction in CSF Abeta42 may occur up to 20 years before the onset of clinical symptoms, according to Dr. Peskind. “But what we don’t know from this study is what this means, as total tau protein and p-tau181 in the CSF were also low, which isn’t entirely typical in the picture of preclinical Alzheimer’s disease,” she said. However, changes in total tau and p-tau181 lag behind changes in Abeta42.

Is Impaired Clearance the Culprit?

Coauthor Jeffrey Iliff, PhD, professor, University of Washington Department of Psychiatry and Behavioral Sciences and University of Washington Department of Neurology, Seattle, elaborated.

“In the setting of Alzheimer’s disease, a signature of the disease is reduced CSF Abeta42, which is thought to reflect that much of the amyloid gets ‘stuck’ in the brain in the form of amyloid plaques,” he said. “There are usually higher levels of phosphorylated tau and total tau, which are thought to reflect the presence of tau tangles and degeneration of neurons in the brain. But in this study, all of those were lowered, which is not exactly an Alzheimer’s disease profile.”

Dr. Iliff, associate director for research, VA Northwest Mental Illness Research, Education, and Clinical Center at VA Puget Sound Health Care System, Seattle, suggested that the culprit may be impairment in the brain’s glymphatic system. “Recently described biological research supports [the concept of] clearance of waste out of the brain during sleep via the glymphatic system, with amyloid and tau being cleared from the brain interstitium during sleep.”

A recent hypothesis is that blast TBI impairs that process. “This is why we see less of those proteins in the CSF. They’re not being cleared, which might contribute downstream to the clumping up of protein in the brain,” he suggested.

The evidence base corroborating that hypothesis is in its infancy; however, new research conducted by Dr. Iliff and his colleagues sheds light on this potential mechanism.

In blast TBI, energy from the explosion and resulting overpressure wave are “transmitted through the brain, which causes tissues of different densities — such as gray and white matter — to accelerate at different rates,” according to Dr. Iliff. This results in the shearing and stretching of brain tissue, leading to a “diffuse pattern of tissue damage.”

It is known that blast TBI has clinical overlap and associations with posttraumatic stress disorder (PTSD), depression, and persistent neurobehavioral symptoms; that veterans with a history of TBI are more than twice as likely to die by suicide than veterans with no TBI history; and that TBI may increase the risk for Alzheimer’s disease and related dementing disorders, as well as CTE.

The missing link may be the glymphatic system — a “brain-wide network of perivascular pathways, along which CSF and interstitial fluid (ISF) exchange, supporting the clearance of interstitial solutes, including amyloid-beta.”

Dr. Iliff and his group previously found that glymphatic function is “markedly and chronically impaired” following impact TBI in mice and that this impairment is associated with the mislocalization of astroglial aquaporin 4 (AQP4), a water channel that lines perivascular spaces and plays a role in healthy glymphatic exchange.

In their new study, the researchers examined both the expression and the localization of AQP4 in the human postmortem frontal cortex and found “distinct laminar differences” in AQP4 expression following blast exposure. They observed similar changes as well as impairment of glymphatic function, which emerged 28 days following blast injury in a mouse model of repetitive blast mTBI.

And in a cohort of veterans with blast mTBI, blast exposure was found to be associated with an increased burden of frontal cortical MRI-visible perivascular spaces — a “putative neuroimaging marker” of glymphatic perivascular dysfunction.

The earlier Neurology study “showed impairment of biomarkers in the CSF, but the new study showed ‘why’ or ‘how’ these biomarkers are impaired, which is via impairment of the glymphatic clearance process,” Dr. Iliff explained.

Veterans Especially Vulnerable

Dr. Peskind, co-director of the VA Northwest Mental Illness Research, Education and Clinical Center, VA Puget Sound Health Care System, noted that while the veterans in the earlier study had at least one TBI, the average number was 20, and it was more common to have more than 50 mTBIs than to have a single one.

“These were highly exposed combat vets,” she said. “And that number doesn’t even account for subconcussive exposure to blasts, which now appear to cause detectable brain damage, even in the absence of a diagnosable TBI.”

The Maine shooter, Mr. Card, had not seen combat and was not assessed for TBI during a psychiatric hospitalization, according to The New York Times.

Dr. Peskind added that this type of blast damage is likely specific to individuals in the military. “It isn’t the sound that causes the damage,” she explained. “It’s the blast wave, the pressure wave, and there aren’t a lot of other occupations that have those types of occupational exposures.”

Dr. Snyder added that blast TBIs have been studied mostly in military personnel, and she is not aware of studies that have looked at blast injuries in other industries, such as demolition or mining, to see whether they have the same type of biologic consequences.

Dr. Snyder hopes that the researchers will follow the participants in the Neurology study and continue looking at specific markers related to Alzheimer’s disease brain changes. What the research so far shows “is that, at an earlier age, we’re starting to see those markers changing, suggesting that the underlying biology in people with mild blast TBI is similar to the underlying biology in Alzheimer’s disease as well.”

Michael Alosco, PhD, associate professor and vice chair of research, department of neurology, Boston University Chobanian & Avedisian School of Medicine, called the issue of blast exposure and TBI “a very complex and nuanced topic,” especially because TBI is “considered a risk factor of Alzheimer’s disease” and “different types of TBIs could trigger distinct pathophysiologic processes; however, the long-term impact of repetitive blast TBIs on neurodegenerative disease changes remains unknown.”

He coauthored an editorial on the earlier Neurology study that noted its limitations, such as a small sample size and lack of consideration of lifestyle and health factors, but acknowledged that the “findings provide preliminary evidence that repetitive blast exposures might influence beta-amyloid accumulation.”

Clinical Implications

For Dr. Peskind, the “inflection point” was seeing lower CSF Abeta42 roughly 20 years earlier than ages 60 and 70, when such reductions more typically appear in cognitively normal community volunteers.

But she described herself as “loath to say that veterans or service members have a 20-year acceleration of risk of Alzheimer’s disease,” adding, “I don’t want to scare the heck out of our service members or veterans.” Although “this is what we fear, we’re not ready to say it for sure yet because we need to do more work. Nevertheless, it does increase the index of suspicion.”

The clinical take-home messages are not unique to service members or veterans or people with a history of head injuries or a genetic predisposition to Alzheimer’s disease, she emphasized. “If anyone of any age or occupation comes in with cognitive issues, such as [impaired] memory or executive function, they deserve a workup for dementing disorders.” Frontotemporal dementia, for example, can present earlier than Alzheimer’s disease typically does.

Common comorbidities with TBI are PTSD and obstructive sleep apnea (OSA), which can also cause cognitive issues and are also risk factors for dementia.

Dr. Iliff agreed. “If you see a veteran with a history of PTSD, a history of blast TBI, and a history of OSA or some combination of those three, I recommend having a higher index of suspicion [for potential dementia] than for an average person without any of these, even at a younger age than one would ordinarily expect.”

Of all of these factors, the only truly directly modifiable one is sleep disruption, including that caused by OSA or sleep disorders related to PTSD, he added. “Epidemiologic data suggest a connection particularly between midlife sleep disruption and the risk of dementia and Alzheimer’s disease, and so it’s worth thinking about sleep as a modifiable risk factor even as early as the 40s and 50s, whether the patient is or isn’t a veteran.”

Dr. Peskind recommended asking patients, “Do they snore? Do they thrash about during sleep? Do they have trauma nightmares? This will inform the type of intervention required.”

Dr. Alosco added that there is no known “safe” threshold of exposure to blasts, and that thresholds are “unclear, particularly at the individual level.” In American football, there is a dose-response relationship between years of play and risk for later-life neurological disorder. “The best way to mitigate risk is to limit cumulative exposure,” he said.

The study by Li and colleagues was funded by grant funding from the Department of Veterans Affairs Rehabilitation Research and Development Service and the University of Washington Friends of Alzheimer’s Research. Other sources of funding to individual researchers are listed in the original paper. The study by Braun and colleagues was supported by the National Heart, Lung and Blood Institute; the Department of Veterans Affairs Rehabilitation Research and Development Service; and the National Institute on Aging. The white paper included studies that received funding from numerous sources, including the National Institutes of Health and the DOD. Dr. Iliff serves as the chair of the Scientific Advisory Board for Applied Cognition Inc., from which he receives compensation and in which he holds an equity stake. In the last year, he served as a paid consultant to Gryphon Biosciences. Dr. Peskind has served as a paid consultant to the companies Genentech, Roche, and Alpha Cognition. Dr. Alosco was supported by grant funding from the NIH; he received research support from Rainwater Charitable Foundation Inc., and Life Molecular Imaging Inc.; he has received a single honorarium from the Michael J. Fox Foundation for services unrelated to this editorial; and he received royalties from Oxford University Press Inc. The other authors’ disclosures are listed in the original papers.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

In October 2023, Robert Card — a grenade instructor in the Army Reserve — shot and killed 18 people in Maine, before turning the gun on himself. As reported by The New York Times, his family said that he had become increasingly erratic and violent during the months before the rampage.

A postmortem conducted by the Chronic Traumatic Encephalopathy (CTE) Center at Boston University found “significant evidence of traumatic brain injuries” [TBIs] and “significant degeneration, axonal and myelin loss, inflammation, and small blood vessel injury” in the white matter, the center’s director, Ann McKee, MD, said in a press release. “These findings align with our previous studies on the effects of blast injury in humans and experimental models.”

Members of the military, such as Mr. Card, are exposed to blasts from repeated firing of heavy weapons not only during combat but also during training.

New data suggest that repeated blast exposure may impair the brain’s waste clearance system, leading to biomarker changes indicative of preclinical Alzheimer’s disease 20 years earlier than typical. A higher index of suspicion for dementia or Alzheimer’s disease may be warranted in patients with a history of blast exposure or subconcussive brain injury who present with cognitive issues, according to experts interviewed.

In 2022, the US Department of Defense (DOD) launched its Warfighter Brain Health Initiative with the aim of “optimizing service member brain health and countering traumatic brain injuries.”

In April 2024, the Blast Overpressure Safety Act was introduced in the Senate to require the DOD to enact better blast screening, tracking, prevention, and treatment. The DOD initiated 26 blast overpressure studies.

Heather Snyder, PhD, Alzheimer’s Association vice president of Medical and Scientific Relations, said that an important component of that research involves “the need to study the difference between TBI-caused dementia and dementia caused independently” and “the need to study biomarkers to better understand the long-term consequences of TBI.”
 

What Is the Underlying Biology?

Dr. Snyder was the lead author of a white paper produced by the Alzheimer’s Association in 2018 on military-related risk factors for Alzheimer’s disease and related dementias. “There is a lot of work trying to understand the effect of pure blast waves on the brain, as opposed to the actual impact of the injury,” she said.

The white paper speculated that blast exposure may be analogous to subconcussive brain injury in athletes where there are no obvious immediate clinical symptoms or neurological dysfunction but which can cause cumulative injury and functional impairment over time.

“We are also trying to understand the underlying biology around brain changes, such as accumulation of tau and amyloid and other specific markers related to brain changes in Alzheimer’s disease,” said Dr. Snyder, chair of the Peer Reviewed Alzheimer’s Research Program Programmatic Panel for Alzheimer’s Disease/Alzheimer’s Disease and Related Dementias and TBI.
 

Common Biomarker Signatures

A recent study in Neurology comparing 51 veterans with mild TBI (mTBI) with 85 veterans and civilians with no lifetime history of TBI is among the first to explore these biomarker changes in human beings.

“Our findings suggest that chronic neuropathologic processes associated with blast mTBI share properties in common with pathogenic processes that are precursors to Alzheimer’s disease onset,” said coauthor Elaine R. Peskind, MD, professor of psychiatry and behavioral sciences, University of Washington, Seattle.

The largely male participants were a mean age of 34 years and underwent standardized clinical and neuropsychological testing as well as lumbar puncture to collect cerebrospinal fluid (CSF). The mTBI group had experienced at least one war zone blast or combined blast/impact that met criteria for mTBI, but 91% had more than one blast mTBI, and the study took place over 13 years.

The researchers found that the mTBI group “had biomarker signatures in common with the earliest stages of Alzheimer’s disease,” said Dr. Peskind.

For example, at age 50, they had lower mean levels of CSF amyloid beta 42 (Abeta42), the earliest marker of brain parenchymal Abeta deposition, compared with the control group (154 pg/mL and 1864 pg/mL lower, respectively).

High CSF phosphorylated tau181 (p-tau181) and total tau are established biomarkers for Alzheimer’s disease. However, levels of these biomarkers remained “relatively constant with age” in participants with mTBI but were higher in older ages for the non-TBI group.

The mTBI group also showed worse cognitive performance at older ages (P < .08). Poorer verbal memory and verbal fluency performance were associated with lower CSF Abeta42 in older participants (P ≤ .05).

In Alzheimer’s disease, a reduction in CSF Abeta42 may occur up to 20 years before the onset of clinical symptoms, according to Dr. Peskind. “But what we don’t know from this study is what this means, as total tau protein and p-tau181 in the CSF were also low, which isn’t entirely typical in the picture of preclinical Alzheimer’s disease,” she said. However, changes in total tau and p-tau181 lag behind changes in Abeta42.
 

 

 

Is Impaired Clearance the Culprit?

Coauthor Jeffrey Iliff, PhD, professor, University of Washington Department of Psychiatry and Behavioral Sciences and University of Washington Department of Neurology, Seattle, elaborated.

“In the setting of Alzheimer’s disease, a signature of the disease is reduced CSF Abeta42, which is thought to reflect that much of the amyloid gets ‘stuck’ in the brain in the form of amyloid plaques,” he said. “There are usually higher levels of phosphorylated tau and total tau, which are thought to reflect the presence of tau tangles and degeneration of neurons in the brain. But in this study, all of those were lowered, which is not exactly an Alzheimer’s disease profile.”

Dr. Iliff, associate director for research, VA Northwest Mental Illness Research, Education, and Clinical Center at VA Puget Sound Health Care System, Seattle, suggested that the culprit may be impairment in the brain’s glymphatic system. “Recently described biological research supports [the concept of] clearance of waste out of the brain during sleep via the glymphatic system, with amyloid and tau being cleared from the brain interstitium during sleep.”

A recent hypothesis is that blast TBI impairs that process. “This is why we see less of those proteins in the CSF. They’re not being cleared, which might contribute downstream to the clumping up of protein in the brain,” he suggested.

The evidence base corroborating that hypothesis is in its infancy; however, new research conducted by Dr. Iliff and his colleagues sheds light on this potential mechanism.

In blast TBI, energy from the explosion and resulting overpressure wave are “transmitted through the brain, which causes tissues of different densities — such as gray and white matter — to accelerate at different rates,” according to Dr. Iliff. This results in the shearing and stretching of brain tissue, leading to a “diffuse pattern of tissue damage.”

It is known that blast TBI has clinical overlap and associations with posttraumatic stress disorder (PTSD), depression, and persistent neurobehavioral symptoms; that veterans with a history of TBI are more than twice as likely to die by suicide than veterans with no TBI history; and that TBI may increase the risk for Alzheimer’s disease and related dementing disorders, as well as CTE.

The missing link may be the glymphatic system — a “brain-wide network of perivascular pathways, along which CSF and interstitial fluid (ISF) exchange, supporting the clearance of interstitial solutes, including amyloid-beta.”

Dr. Iliff and his group previously found that glymphatic function is “markedly and chronically impaired” following impact TBI in mice and that this impairment is associated with the mislocalization of astroglial aquaporin 4 (AQP4), a water channel that lines perivascular spaces and plays a role in healthy glymphatic exchange.

In their new study, the researchers examined both the expression and the localization of AQP4 in the human postmortem frontal cortex and found “distinct laminar differences” in AQP4 expression following blast exposure. They observed similar changes as well as impairment of glymphatic function, which emerged 28 days following blast injury in a mouse model of repetitive blast mTBI.

And in a cohort of veterans with blast mTBI, blast exposure was found to be associated with an increased burden of frontal cortical MRI-visible perivascular spaces — a “putative neuroimaging marker” of glymphatic perivascular dysfunction.

The earlier Neurology study “showed impairment of biomarkers in the CSF, but the new study showed ‘why’ or ‘how’ these biomarkers are impaired, which is via impairment of the glymphatic clearance process,” Dr. Iliff explained.
 

 

 

Veterans Especially Vulnerable

Dr. Peskind, co-director of the VA Northwest Mental Illness Research, Education and Clinical Center, VA Puget Sound Health Care System, noted that while the veterans in the earlier study had at least one TBI, the average number was 20, and it was more common to have more than 50 mTBIs than to have a single one.

“These were highly exposed combat vets,” she said. “And that number doesn’t even account for subconcussive exposure to blasts, which now appear to cause detectable brain damage, even in the absence of a diagnosable TBI.”

The Maine shooter, Mr. Card, had not seen combat and was not assessed for TBI during a psychiatric hospitalization, according to The New York Times.

Dr. Peskind added that this type of blast damage is likely specific to individuals in the military. “It isn’t the sound that causes the damage,” she explained. “It’s the blast wave, the pressure wave, and there aren’t a lot of other occupations that have those types of occupational exposures.”

Dr. Snyder added that the majority of blast TBIs have been studied in military personnel, and she is not aware of studies that have looked at blast injuries in other industries, such as demolition or mining, to see if they have the same type of biologic consequences.

Dr. Snyder hopes that the researchers will follow the participants in the Neurology study and continue looking at specific markers related to Alzheimer’s disease brain changes. What the research so far shows “is that, at an earlier age, we’re starting to see those markers changing, suggesting that the underlying biology in people with mild blast TBI is similar to the underlying biology in Alzheimer’s disease as well.”

Michael Alosco, PhD, associate professor and vice chair of research, department of neurology, Boston University Chobanian & Avedisian School of Medicine, called the issue of blast exposure and TBI “a very complex and nuanced topic,” especially because TBI is “considered a risk factor of Alzheimer’s disease” and “different types of TBIs could trigger distinct pathophysiologic processes; however, the long-term impact of repetitive blast TBIs on neurodegenerative disease changes remains unknown.”

He coauthored an editorial on the earlier Neurology study that noted its limitations, such as a small sample size and lack of consideration of lifestyle and health factors but acknowledged that the “findings provide preliminary evidence that repetitive blast exposures might influence beta-amyloid accumulation.”
 

Clinical Implications

For Dr. Peskind, the “inflection point” was seeing lower CSF Abeta42, about 20 years earlier than ages 60 and 70, which is more typical in cognitively normal community volunteers.

But she described herself as “loath to say that veterans or service members have a 20-year acceleration of risk of Alzheimer’s disease,” adding, “I don’t want to scare the heck out of our service members of veterans.” Although “this is what we fear, we’re not ready to say it for sure yet because we need to do more work. Nevertheless, it does increase the index of suspicion.”

The clinical take-home messages are not unique to service members or veterans or people with a history of head injuries or a genetic predisposition to Alzheimer’s disease, she emphasized. “If anyone of any age or occupation comes in with cognitive issues, such as [impaired] memory or executive function, they deserve a workup for dementing disorders.” Frontotemporal dementia, for example, can present earlier than Alzheimer’s disease typically does.

Common comorbidities with TBI are PTSD and obstructive sleep apnea (OSA), which can also cause cognitive issues and are also risk factors for dementia.

Dr. Iliff agreed. “If you see a veteran with a history of PTSD, a history of blast TBI, and a history of OSA or some combination of those three, I recommend having a higher index of suspicion [for potential dementia] than for an average person without any of these, even at a younger age than one would ordinarily expect.”

Of all of these factors, the only truly directly modifiable one is sleep disruption, including that caused by OSA or sleep disorders related to PTSD, he added. “Epidemiologic data suggest a connection particularly between midlife sleep disruption and the risk of dementia and Alzheimer’s disease, and so it’s worth thinking about sleep as a modifiable risk factor even as early as the 40s and 50s, whether the patient is or isn’t a veteran.”

Dr. Peskind recommended asking patients, “Do they snore? Do they thrash about during sleep? Do they have trauma nightmares? This will inform the type of intervention required.”

Dr. Alosco added that there is no known “safe” threshold of exposure to blasts, and that thresholds are “unclear, particularly at the individual level.” In American football, there is a dose-response relationship between years of play and risk for later-life neurological disorder. “The best way to mitigate risk is to limit cumulative exposure,” he said.

The study by Li and colleagues was funded by grant funding from the Department of Veterans Affairs Rehabilitation Research and Development Service and the University of Washington Friends of Alzheimer’s Research. Other sources of funding to individual researchers are listed in the original paper. The study by Braun and colleagues was supported by the National Heart, Lung and Blood Institute; the Department of Veterans Affairs Rehabilitation Research and Development Service; and the National Institute on Aging. The white paper included studies that received funding from numerous sources, including the National Institutes of Health and the DOD. Dr. Iliff serves as the chair of the Scientific Advisory Board for Applied Cognition Inc., from which he receives compensation and in which he holds an equity stake. In the last year, he served as a paid consultant to Gryphon Biosciences. Dr. Peskind has served as a paid consultant to the companies Genentech, Roche, and Alpha Cognition. Dr. Alosco was supported by grant funding from the NIH; he received research support from Rainwater Charitable Foundation Inc., and Life Molecular Imaging Inc.; he has received a single honorarium from the Michael J. Fox Foundation for services unrelated to this editorial; and he received royalties from Oxford University Press Inc. The other authors’ disclosures are listed in the original papers.
 

A version of this article appeared on Medscape.com.

In October 2023, Robert Card — a grenade instructor in the Army Reserve — shot and killed 18 people in Maine, before turning the gun on himself. As reported by The New York Times, his family said that he had become increasingly erratic and violent during the months before the rampage.

A postmortem conducted by the Chronic Traumatic Encephalopathy (CTE) Center at Boston University found “significant evidence of traumatic brain injuries” [TBIs] and “significant degeneration, axonal and myelin loss, inflammation, and small blood vessel injury” in the white matter, the center’s director, Ann McKee, MD, said in a press release. “These findings align with our previous studies on the effects of blast injury in humans and experimental models.”

Members of the military, such as Mr. Card, are exposed to blasts from repeated firing of heavy weapons not only during combat but also during training.

New data suggest that repeated blast exposure may impair the brain’s waste clearance system, leading to biomarker changes indicative of preclinical Alzheimer’s disease 20 years earlier than typical. A higher index of suspicion for dementia or Alzheimer’s disease may be warranted in patients with a history of blast exposure or subconcussive brain injury who present with cognitive issues, according to experts interviewed.

In 2022, the US Department of Defense (DOD) launched its Warfighter Brain Health Initiative with the aim of “optimizing service member brain health and countering traumatic brain injuries.”

In April 2024, the Blast Overpressure Safety Act was introduced in the Senate to require the DOD to enact better blast screening, tracking, prevention, and treatment. The DOD initiated 26 blast overpressure studies.

Heather Snyder, PhD, Alzheimer’s Association vice president of Medical and Scientific Relations, said that an important component of that research involves “the need to study the difference between TBI-caused dementia and dementia caused independently” and “the need to study biomarkers to better understand the long-term consequences of TBI.”
 

What Is the Underlying Biology?

Dr. Snyder was the lead author of a white paper produced by the Alzheimer’s Association in 2018 on military-related risk factors for Alzheimer’s disease and related dementias. “There is a lot of work trying to understand the effect of pure blast waves on the brain, as opposed to the actual impact of the injury,” she said.

The white paper speculated that blast exposure may be analogous to subconcussive brain injury in athletes where there are no obvious immediate clinical symptoms or neurological dysfunction but which can cause cumulative injury and functional impairment over time.

“We are also trying to understand the underlying biology around brain changes, such as accumulation of tau and amyloid and other specific markers related to brain changes in Alzheimer’s disease,” said Dr. Snyder, chair of the Peer Reviewed Alzheimer’s Research Program Programmatic Panel for Alzheimer’s Disease/Alzheimer’s Disease and Related Dementias and TBI.
 

Common Biomarker Signatures

A recent study in Neurology comparing 51 veterans with mild TBI (mTBI) with 85 veterans and civilians with no lifetime history of TBI is among the first to explore these biomarker changes in human beings.

“Our findings suggest that chronic neuropathologic processes associated with blast mTBI share properties in common with pathogenic processes that are precursors to Alzheimer’s disease onset,” said coauthor Elaine R. Peskind, MD, professor of psychiatry and behavioral sciences, University of Washington, Seattle.

The largely male participants were a mean age of 34 years and underwent standardized clinical and neuropsychological testing as well as lumbar puncture to collect cerebrospinal fluid (CSF). The mTBI group had experienced at least one war zone blast or combined blast/impact that met criteria for mTBI, but 91% had more than one blast mTBI, and the study took place over 13 years.

The researchers found that the mTBI group “had biomarker signatures in common with the earliest stages of Alzheimer’s disease,” said Dr. Peskind.

For example, at age 50, they had lower mean levels of CSF amyloid beta 42 (Abeta42), the earliest marker of brain parenchymal Abeta deposition, compared with the control group (154 pg/mL and 1864 pg/mL lower, respectively).

High CSF phosphorylated tau181 (p-tau181) and total tau are established biomarkers for Alzheimer’s disease. However, levels of these biomarkers remained “relatively constant with age” in participants with mTBI but were higher in older ages for the non-TBI group.

The mTBI group also showed worse cognitive performance at older ages (P < .08). Poorer verbal memory and verbal fluency performance were associated with lower CSF Abeta42 in older participants (P ≤ .05).

In Alzheimer’s disease, a reduction in CSF Abeta42 may occur up to 20 years before the onset of clinical symptoms, according to Dr. Peskind. “But what we don’t know from this study is what this means, as total tau protein and p-tau181 in the CSF were also low, which isn’t entirely typical in the picture of preclinical Alzheimer’s disease,” she said. However, changes in total tau and p-tau181 lag behind changes in Abeta42.
 

 

 

Is Impaired Clearance the Culprit?

Coauthor Jeffrey Iliff, PhD, professor, University of Washington Department of Psychiatry and Behavioral Sciences and University of Washington Department of Neurology, Seattle, elaborated.

“In the setting of Alzheimer’s disease, a signature of the disease is reduced CSF Abeta42, which is thought to reflect that much of the amyloid gets ‘stuck’ in the brain in the form of amyloid plaques,” he said. “There are usually higher levels of phosphorylated tau and total tau, which are thought to reflect the presence of tau tangles and degeneration of neurons in the brain. But in this study, all of those were lowered, which is not exactly an Alzheimer’s disease profile.”

Dr. Iliff, associate director for research, VA Northwest Mental Illness Research, Education, and Clinical Center at VA Puget Sound Health Care System, Seattle, suggested that the culprit may be impairment in the brain’s glymphatic system. “Recently described biological research supports [the concept of] clearance of waste out of the brain during sleep via the glymphatic system, with amyloid and tau being cleared from the brain interstitium during sleep.”

A recent hypothesis is that blast TBI impairs that process. “This is why we see less of those proteins in the CSF. They’re not being cleared, which might contribute downstream to the clumping up of protein in the brain,” he suggested.

The evidence base corroborating that hypothesis is in its infancy; however, new research conducted by Dr. Iliff and his colleagues sheds light on this potential mechanism.

In blast TBI, energy from the explosion and resulting overpressure wave are “transmitted through the brain, which causes tissues of different densities — such as gray and white matter — to accelerate at different rates,” according to Dr. Iliff. This results in the shearing and stretching of brain tissue, leading to a “diffuse pattern of tissue damage.”

It is known that blast TBI has clinical overlap and associations with posttraumatic stress disorder (PTSD), depression, and persistent neurobehavioral symptoms; that veterans with a history of TBI are more than twice as likely to die by suicide than veterans with no TBI history; and that TBI may increase the risk for Alzheimer’s disease and related dementing disorders, as well as CTE.

The missing link may be the glymphatic system — a “brain-wide network of perivascular pathways, along which CSF and interstitial fluid (ISF) exchange, supporting the clearance of interstitial solutes, including amyloid-beta.”

Dr. Iliff and his group previously found that glymphatic function is “markedly and chronically impaired” following impact TBI in mice and that this impairment is associated with the mislocalization of astroglial aquaporin 4 (AQP4), a water channel that lines perivascular spaces and plays a role in healthy glymphatic exchange.

In their new study, the researchers examined both the expression and the localization of AQP4 in the human postmortem frontal cortex and found “distinct laminar differences” in AQP4 expression following blast exposure. They observed similar changes as well as impairment of glymphatic function, which emerged 28 days following blast injury in a mouse model of repetitive blast mTBI.

And in a cohort of veterans with blast mTBI, blast exposure was found to be associated with an increased burden of frontal cortical MRI-visible perivascular spaces — a “putative neuroimaging marker” of glymphatic perivascular dysfunction.

The earlier Neurology study “showed impairment of biomarkers in the CSF, but the new study showed ‘why’ or ‘how’ these biomarkers are impaired, which is via impairment of the glymphatic clearance process,” Dr. Iliff explained.

Veterans Especially Vulnerable

Dr. Peskind, co-director of the VA Northwest Mental Illness Research, Education and Clinical Center, VA Puget Sound Health Care System, noted that while the veterans in the earlier study had at least one TBI, the average number was 20, and it was more common to have more than 50 mTBIs than to have a single one.

“These were highly exposed combat vets,” she said. “And that number doesn’t even account for subconcussive exposure to blasts, which now appear to cause detectable brain damage, even in the absence of a diagnosable TBI.”

The Maine shooter, Mr. Card, had not seen combat and was not assessed for TBI during a psychiatric hospitalization, according to The New York Times.

Dr. Peskind added that this type of blast damage is likely specific to individuals in the military. “It isn’t the sound that causes the damage,” she explained. “It’s the blast wave, the pressure wave, and there aren’t a lot of other occupations that have those types of occupational exposures.”

Dr. Snyder added that the majority of blast TBIs have been studied in military personnel, and she is not aware of studies that have looked at blast injuries in other industries, such as demolition or mining, to see if they have the same type of biologic consequences.

Dr. Snyder hopes that the researchers will follow the participants in the Neurology study and continue looking at specific markers related to Alzheimer’s disease brain changes. What the research so far shows “is that, at an earlier age, we’re starting to see those markers changing, suggesting that the underlying biology in people with mild blast TBI is similar to the underlying biology in Alzheimer’s disease as well.”

Michael Alosco, PhD, associate professor and vice chair of research, department of neurology, Boston University Chobanian & Avedisian School of Medicine, called the issue of blast exposure and TBI “a very complex and nuanced topic,” especially because TBI is “considered a risk factor of Alzheimer’s disease” and “different types of TBIs could trigger distinct pathophysiologic processes; however, the long-term impact of repetitive blast TBIs on neurodegenerative disease changes remains unknown.”

He coauthored an editorial on the earlier Neurology study that noted its limitations, such as a small sample size and lack of consideration of lifestyle and health factors but acknowledged that the “findings provide preliminary evidence that repetitive blast exposures might influence beta-amyloid accumulation.”
Clinical Implications

For Dr. Peskind, the “inflection point” was seeing lower CSF Abeta42 roughly 20 years earlier than the ages of 60 and 70, when such changes are more typical in cognitively normal community volunteers.

But she described herself as “loath to say that veterans or service members have a 20-year acceleration of risk of Alzheimer’s disease,” adding, “I don’t want to scare the heck out of our service members or veterans.” Although “this is what we fear, we’re not ready to say it for sure yet because we need to do more work. Nevertheless, it does increase the index of suspicion.”

The clinical take-home messages are not unique to service members or veterans or people with a history of head injuries or a genetic predisposition to Alzheimer’s disease, she emphasized. “If anyone of any age or occupation comes in with cognitive issues, such as [impaired] memory or executive function, they deserve a workup for dementing disorders.” Frontotemporal dementia, for example, can present earlier than Alzheimer’s disease typically does.

Common comorbidities with TBI are PTSD and obstructive sleep apnea (OSA), which can also cause cognitive issues and are also risk factors for dementia.

Dr. Iliff agreed. “If you see a veteran with a history of PTSD, a history of blast TBI, and a history of OSA or some combination of those three, I recommend having a higher index of suspicion [for potential dementia] than for an average person without any of these, even at a younger age than one would ordinarily expect.”

Of all of these factors, the only truly directly modifiable one is sleep disruption, including that caused by OSA or sleep disorders related to PTSD, he added. “Epidemiologic data suggest a connection particularly between midlife sleep disruption and the risk of dementia and Alzheimer’s disease, and so it’s worth thinking about sleep as a modifiable risk factor even as early as the 40s and 50s, whether the patient is or isn’t a veteran.”

Dr. Peskind recommended asking patients, “Do they snore? Do they thrash about during sleep? Do they have trauma nightmares? This will inform the type of intervention required.”

Dr. Alosco added that there is no known “safe” threshold of exposure to blasts, and that thresholds are “unclear, particularly at the individual level.” In American football, there is a dose-response relationship between years of play and risk for later-life neurological disorder. “The best way to mitigate risk is to limit cumulative exposure,” he said.

The study by Li and colleagues was funded by grant funding from the Department of Veterans Affairs Rehabilitation Research and Development Service and the University of Washington Friends of Alzheimer’s Research. Other sources of funding to individual researchers are listed in the original paper. The study by Braun and colleagues was supported by the National Heart, Lung and Blood Institute; the Department of Veterans Affairs Rehabilitation Research and Development Service; and the National Institute on Aging. The white paper included studies that received funding from numerous sources, including the National Institutes of Health and the DOD. Dr. Iliff serves as the chair of the Scientific Advisory Board for Applied Cognition Inc., from which he receives compensation and in which he holds an equity stake. In the last year, he served as a paid consultant to Gryphon Biosciences. Dr. Peskind has served as a paid consultant to the companies Genentech, Roche, and Alpha Cognition. Dr. Alosco was supported by grant funding from the NIH; he received research support from Rainwater Charitable Foundation Inc., and Life Molecular Imaging Inc.; he has received a single honorarium from the Michael J. Fox Foundation for services unrelated to this editorial; and he received royalties from Oxford University Press Inc. The other authors’ disclosures are listed in the original papers.
A version of this article appeared on Medscape.com.
