Paclitaxel Matches Cisplatin HIPEC in Ovarian Cancer
TOPLINE:
Patients with advanced ovarian cancer undergoing interval cytoreductive surgery who received paclitaxel-based hyperthermic intraperitoneal chemotherapy (HIPEC) during surgery appeared to have comparable overall survival and disease-free survival rates to those who received cisplatin-based HIPEC.
METHODOLOGY:
- Although the use of HIPEC remains controversial, cisplatin-based HIPEC during cytoreductive surgery may benefit patients with advanced ovarian cancer; however, there is less evidence for paclitaxel-based HIPEC, typically used in patients who are frail or intolerant to platinum agents.
- To compare the two regimens, researchers analyzed data from the National Registry of Peritoneal Carcinomatosis, which included 846 patients (mean age, 59 years) who underwent interval cytoreductive surgery with either cisplatin-based HIPEC (n = 325) or paclitaxel-based HIPEC (n = 521). After propensity score matching, there were 199 patients per group (total = 398).
- HIPEC was administered after surgery with cisplatin (75-100 mg/m² for 90 minutes) or paclitaxel (120 mg/m² for 60 minutes), both at 42-43 °C.
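The 1:1 propensity score matching described above, which reduced the cohort from 846 to 199 patients per arm, can be sketched as a greedy nearest-neighbor pairing on estimated propensity scores. The caliper and the toy scores below are illustrative assumptions, not the registry's actual matching specification.

```python
import numpy as np

def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    Each treated unit is paired with the closest unmatched control whose
    score differs by at most `caliper`; units without an acceptable match
    drop out, which is why a matched cohort is smaller than the full sample.
    """
    control_ps = np.asarray(control_ps, dtype=float)
    available = np.ones(len(control_ps), dtype=bool)
    pairs = []
    for i, ps in enumerate(treated_ps):
        dist = np.abs(control_ps - ps)
        dist[~available] = np.inf  # a control can be matched only once
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((i, j))
            available[j] = False
    return pairs

# Toy propensity scores (hypothetical): 4 treated patients, 6 controls
treated = [0.30, 0.55, 0.70, 0.90]
controls = [0.28, 0.33, 0.56, 0.71, 0.10, 0.95]
print(greedy_match(treated, controls))
```

In practice, the scores themselves would first be estimated from baseline covariates (eg, with a logistic regression), and balance between the matched arms would be checked before any outcome comparison.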
TAKEAWAY:
- Using cisplatin as the reference group, median overall survival did not differ significantly between the two regimens (hazard ratio [HR], 0.74; P = .16), although it was numerically longer in the paclitaxel group (82 months) than in the cisplatin group (58 months).
- Disease-free survival also did not differ significantly between the two groups, with a median of 20 months in the cisplatin group and 21 months in the paclitaxel group (HR, 0.95; 95% CI, 0.72-1.25; P = .70).
- Overall survival was comparable during the first 20 months of follow-up, and disease-free survival was equivalent during the first 15 months of follow-up, based on a predefined equivalence margin of 0.1.
- Paclitaxel-based HIPEC was not associated with increased morbidity (odds ratio, 1.32; P = .06).
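The equivalence assessment above comes down to simple arithmetic: at a given time point, two arms are declared equivalent when the confidence interval for the difference in their survival probabilities falls entirely within the predefined margin (0.1 here). The survival estimates and standard error below are invented illustrative numbers, not values from the registry.

```python
def equivalent(surv_a, surv_b, se_diff, margin=0.1, z=1.96):
    """True when the 95% CI for the difference in survival probabilities
    lies entirely within (-margin, +margin)."""
    diff = surv_a - surv_b
    lo, hi = diff - z * se_diff, diff + z * se_diff
    return -margin < lo and hi < margin

# Hypothetical 20-month overall-survival estimates for the two arms:
# difference 0.03, 95% CI roughly (-0.03, 0.09), inside the ±0.1 margin
print(equivalent(0.86, 0.83, se_diff=0.03))
```

Note the asymmetry with ordinary significance testing: a nonsignificant difference (as in the hazard ratio comparisons above) does not by itself establish equivalence, which is why the margin-based analysis is reported separately.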
IN PRACTICE:
“Our study suggests that cisplatin and paclitaxel are two safe and effective drugs to be used for HIPEC in [interval cytoreductive surgery] for advanced ovarian cancer. As cisplatin is the preferred drug according to strong evidence, paclitaxel could be a valuable alternative for patients with any contraindication to cisplatin, with similar oncological and perioperative outcomes,” the authors wrote.
SOURCE:
This study, led by Salud González Sánchez, MD, Reina Sofía University Hospital in Córdoba, Spain, was published online in JAMA Network Open.
LIMITATIONS:
The retrospective design of this study limited causal inference. The BRCA mutation status was not captured in the national registry. Additionally, the matching procedure resulted in a moderate sample size, which could have led to residual confounding.
DISCLOSURES:
The authors did not disclose any funding information and reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Strength Training Can Improve Lymphedema in Breast Cancer
TOPLINE:
A recent study found that 3 months of resistance training did not worsen lymphedema in breast cancer survivors and instead significantly improved fluid balance and increased upper extremity muscle mass. The edema index also improved, suggesting potential therapeutic benefits of intense resistance training for managing lymphedema.
METHODOLOGY:
- Lymphedema is a common adverse effect of breast cancer treatment that can limit mobility. Although strength training can have multiple benefits for breast cancer survivors, such as increased bone density and metabolism, data on whether more intense resistance training exacerbates lymphedema in this population are limited. Worries that more intense training will lead to or worsen lymphedema have typically led to cautious recommendations.
- Researchers conducted a cohort study involving 115 women with breast cancer (median age, 54 years; 96% White; 4% Black) between September 2022 and March 2024. Most (83%) underwent sentinel lymph node biopsy (SLNB), while 12% had axillary lymph node dissection (ALND). At baseline, 13% had clinical lymphedema, including 37% in the ALND group and 8% in the SLNB group.
- Participants attended resistance training sessions three times a week, with intensity escalation over 3 months. Exercises involved hand weights, resistance bands, and body weight (eg, pushups) to promote strength, mobility, and muscle hypertrophy.
- Bioimpedance analysis measured intracellular water, extracellular water, and total body water before and after exercise. Lymphedema was defined as more than a 3% increase in arm circumference discrepancy relative to preoperative ipsilateral arm measurements, along with an elevated edema index (extracellular water to total body water ratio).
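The two-part lymphedema definition above reduces to a percentage change plus a ratio. The sketch below uses invented bioimpedance and circumference values, and the 0.380 "normal" edema-index cutoff is an assumed placeholder rather than a threshold taken from the paper.

```python
def edema_index(ecw_liters, tbw_liters):
    """Edema index: extracellular water as a fraction of total body water."""
    return ecw_liters / tbw_liters

def meets_lymphedema_definition(circ_pre_cm, circ_post_cm, ecw, tbw,
                                normal_index=0.380):
    """Study definition: >3% increase in arm circumference relative to the
    preoperative measurement AND an elevated edema index (ECW/TBW)."""
    circ_increase = (circ_post_cm - circ_pre_cm) / circ_pre_cm
    return circ_increase > 0.03 and edema_index(ecw, tbw) > normal_index

# Hypothetical patient: 5% circumference increase, edema index ~0.385
print(meets_lymphedema_definition(30.0, 31.5, ecw=14.2, tbw=36.9))
```

Requiring both criteria guards against false positives from either measure alone, since arm circumference also rises with muscle gain, which is exactly what resistance training produces.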
TAKEAWAY:
- No participants experienced subjective or clinical worsening of lymphedema after completing the resistance training regimen.
- Lean mass in the affected arm increased from a median of 5.45 lb to 5.64 lb (P < .001), while lean mass in the unaffected arm rose from 5.51 lb to 5.53 lb (P < .001) after the resistance training.
- Overall, participants’ fluid balance improved. The edema index in both arms showed a significant reduction at training completion (mean, 0.383) vs baseline (mean, 0.385), indicating reduced lymphedema. Subgroup analysis of women who underwent SLNB showed similar improvements in the edema index.
IN PRACTICE:
“These findings highlight the safety of strength and resistance training in a large group of patients with breast cancer during and after treatment,” the authors wrote. Beyond that, the authors noted, the results point to a potential role for resistance training in reducing lymphedema.
SOURCE:
This study, led by Parisa Shamsesfandabadi, MD, Allegheny Health Network, Pittsburgh, was published online in JAMA Network Open.
LIMITATIONS:
A major limitation was the absence of a control group, which prevented a direct comparison between the effects of exercise and the natural progression of lymphedema. The 3-month intervention provided limited insight into the long-term sustainability of benefits. Patient-reported outcomes were not included. Additionally, potential confounding variables such as diet, medication use, and baseline physical activity levels were not controlled for in the analysis.
DISCLOSURES:
The authors did not disclose any funding information. Several authors reported having ties with various sources. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Can Adjuvant Immunotherapy Boost Survival Outcomes in Advanced Nasopharyngeal Cancer?
TOPLINE:
Adjuvant therapy with camrelizumab significantly improved 3-year event-free survival in patients with locoregionally advanced nasopharyngeal carcinoma compared with observation, according to findings from the phase 3 DIPPER trial.
METHODOLOGY:
- About 20%-30% of patients with locoregionally advanced nasopharyngeal carcinoma experience disease relapse after definitive chemoradiotherapy. Camrelizumab plus chemotherapy can improve progression-free survival in patients with recurrent or metastatic nasopharyngeal carcinoma, but its effectiveness as adjuvant therapy in locoregionally advanced disease remains unclear.
- Researchers conducted the randomized phase 3 DIPPER trial at 11 centers in China, enrolling 450 patients with T4N1M0 or T1-4N2-3M0 nasopharyngeal carcinoma who had completed induction-concurrent chemoradiotherapy.
- Participants were randomly assigned to receive either adjuvant camrelizumab (200 mg intravenously every 3 weeks for 12 cycles; n = 226) or observation (n = 224). The median follow-up duration was 39 months.
- The primary endpoint was event-free survival, defined as freedom from distant metastasis, locoregional relapse, or death due to any cause; secondary endpoints included distant metastasis–free survival, locoregional relapse–free survival, overall survival, and safety.
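The composite primary endpoint above (time to the first of distant metastasis, locoregional relapse, or death) can be sketched as a small helper that derives the analysis time and event indicator per patient. The argument names and times are illustrative, not the trial's data dictionary.

```python
def event_free_survival(followup_months, metastasis_t=None,
                        relapse_t=None, death_t=None):
    """Return (time, event_flag): time to the earliest component event,
    or censoring at last follow-up when no event occurred."""
    event_times = [t for t in (metastasis_t, relapse_t, death_t)
                   if t is not None]
    if event_times:
        return min(event_times), True
    return followup_months, False

# Hypothetical patients
print(event_free_survival(39))                            # no event: censored
print(event_free_survival(39, relapse_t=14, death_t=30))  # earliest event counts
```

These (time, event) pairs are what a Kaplan-Meier estimator or Cox model would consume to produce the 3-year rates and stratified hazard ratios reported below.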
TAKEAWAY:
- Patients who received camrelizumab had a higher 3-year event-free survival rate than those who underwent observation (86.9% vs 77.3%; stratified hazard ratio [HR], 0.56; P = .01).
- The 3-year distant metastasis–free survival was also higher in the camrelizumab group (92.4% vs 84.5%; stratified HR, 0.54; P = .04).
- Patients in the camrelizumab group had higher locoregional relapse–free survival at 3 years than those in the observation group (92.8% vs 87.0%; stratified HR, 0.53; P = .046). However, the difference in overall survival between the groups was not significant.
- The safety analysis included 426 patients. Overall, 97.1% of those who received camrelizumab experienced at least one adverse event of any grade (most commonly reactive capillary endothelial proliferation), compared with 85.5% of those in the observation group. Further, 11.2% of patients receiving camrelizumab reported grade 3 or 4 events (including leukopenia and neutropenia), compared with 3% of those in the observation group.
IN PRACTICE:
“The DIPPER trial demonstrated that adjuvant camrelizumab following induction-concurrent chemoradiotherapy significantly improved event-free survival by 9.6% with a favorable safety profile in patients with locoregionally advanced [nasopharyngeal carcinoma],” the authors wrote.
“If survival is eventually proven to be improved with induction chemoimmunotherapy, can we begin asking about de-escalation of chemoradiotherapy” for patients with nasopharyngeal carcinoma? “This question is exceptionally important, given the significant long-term consequences of radiotherapy on survivors,” the author of an accompanying editorial wrote.
SOURCE:
The study was led by Ye-Lin Liang, MD, Sun Yat-sen University Cancer Center in Guangzhou, China, and was published online in JAMA.
LIMITATIONS:
The study included patients from an endemic region where nasopharyngeal carcinoma is predominantly linked to Epstein-Barr virus infection, potentially affecting the generalizability of the findings to nonendemic populations. The open-label design may have introduced bias. Additionally, combined positive scores for programmed cell death ligand 1 (PD-L1) were unavailable for some patients, potentially affecting the analysis of the correlation between PD-L1 expression and clinical outcomes.
DISCLOSURES:
The study was supported by the Noncommunicable Chronic Diseases-National Science and Technology Major Project, National Natural Science Foundation of China, Guangzhou Municipal Health Commission, Key Area Research and Development Program of Guangdong Province, Overseas Expertise Introduction Project for Discipline Innovation, and Cancer Innovative Research Program of Sun Yat-sen University Cancer Center. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
TOPLINE:
Adjuvant therapy with camrelizumab significantly improved 3-year event-free survival in patients with locoregionally advanced nasopharyngeal carcinoma compared with observation, according to findings from the phase 3 DIPPER trial.
METHODOLOGY:
- About 20%-30% of patients with locoregionally advanced nasopharyngeal carcinoma experience disease relapse after definitive chemoradiotherapy. Camrelizumab plus chemotherapy can improve progression-free survival in patients with recurrent or metastatic nasopharyngeal carcinoma, but its effectiveness as adjuvant therapy in locoregionally advanced disease remains unclear.
- Researchers conducted the randomized phase 3 DIPPER trial at 11 centers in China, enrolling 450 patients with T4N1M0 or T1-4N2-3M0 nasopharyngeal carcinoma who had completed induction-concurrent chemoradiotherapy.
- Participants were randomly assigned to receive either adjuvant camrelizumab (200 mg intravenously every 3 weeks for 12 cycles; n = 226) or observation (n = 224). The median follow-up duration was 39 months.
- The primary endpoint was event-free survival, defined as freedom from distant metastasis, locoregional relapse, or death due to any cause; secondary endpoints included distant metastasis–free survival, locoregional relapse–free survival, overall survival, and safety.
TAKEAWAY:
- Patients who received camrelizumab had a higher 3-year event-free survival rate than those who underwent observation (86.9% vs 77.3%; stratified hazard ratio [HR], 0.56; P = .01).
- The 3-year distant metastasis–free survival was also higher in the camrelizumab group (92.4% vs 84.5%; stratified HR, 0.54; P = .04).
- Patients in the camrelizumab group had higher locoregional relapse–free survival at 3 years than those in the observation group (92.8% vs 87.0%; stratified HR, 0.53; P = .046). However, the difference in overall survival between the groups was not significant.
- The safety analysis included 426 patients; 97.1% of those who received camrelizumab experienced at least one adverse event of any grade, the most common being reactive capillary endothelial proliferation compared with 85.5% of those in the observation group. Further, 11.2% of patients taking camrelizumab reported grade 3 or 4 events, including leukopenia and neutropenia compared with 3% in the observation group.
IN PRACTICE:
“The DIPPER trial demonstrated that adjuvant camrelizumab following induction-concurrent chemoradiotherapy significantly improved event-free survival by 9.6% with a favorable safety profile in patients with locoregionally advanced [nasopharyngeal carcinoma],” the authors wrote.
“If survival is eventually proven to be improved with induction chemoimmunotherapy, can we begin asking about de-escalation of chemoradiotherapy” for patients with nasopharyngeal carcinoma? “This question is exceptionally important, given the significant long-term consequences of radiotherapy on survivors,” the author of an accompanying editorial wrote.
SOURCE:
The study was led by Ye-Lin Liang, MD, Sun Yat-sen University Cancer Center in Guangzhou, China, and was published online in JAMA.
LIMITATIONS:
The study included patients from an endemic region where nasopharyngeal carcinoma is predominantly linked to Epstein-Barr virus infection, potentially affecting the generalizability of the findings to nonendemic populations. The open-label design may have introduced bias. Additionally, combined positive scores for programmed cell death ligand 1 (PD-L1) were unavailable for some patients, potentially affecting the analysis of the correlation between PD-L1 expression and clinical outcomes.
DISCLOSURES:
The study was supported by the Noncommunicable Chronic Diseases-National Science and Technology Major Project, National Natural Science Foundation of China, Guangzhou Municipal Health Commission, Key Area Research and Development Program of Guangdong Province, Overseas Expertise Introduction Project for Discipline Innovation, and Cancer Innovative Research Program of Sun Yat-sen University Cancer Center. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
TOPLINE:
Adjuvant therapy with camrelizumab significantly improved 3-year event-free survival in patients with locoregionally advanced nasopharyngeal carcinoma compared with observation, according to findings from the phase 3 DIPPER trial.
METHODOLOGY:
- About 20%-30% of patients with locoregionally advanced nasopharyngeal carcinoma experience disease relapse after definitive chemoradiotherapy. Camrelizumab plus chemotherapy can improve progression-free survival in patients with recurrent or metastatic nasopharyngeal carcinoma, but its effectiveness as adjuvant therapy in locoregionally advanced disease remains unclear.
- Researchers conducted the randomized phase 3 DIPPER trial at 11 centers in China, enrolling 450 patients with T4N1M0 or T1-4N2-3M0 nasopharyngeal carcinoma who had completed induction-concurrent chemoradiotherapy.
- Participants were randomly assigned to receive either adjuvant camrelizumab (200 mg intravenously every 3 weeks for 12 cycles; n = 226) or observation (n = 224). The median follow-up duration was 39 months.
- The primary endpoint was event-free survival, defined as freedom from distant metastasis, locoregional relapse, or death due to any cause; secondary endpoints included distant metastasis–free survival, locoregional relapse–free survival, overall survival, and safety.
TAKEAWAY:
- Patients who received camrelizumab had a higher 3-year event-free survival rate than those who underwent observation (86.9% vs 77.3%; stratified hazard ratio [HR], 0.56; P = .01).
- The 3-year distant metastasis–free survival was also higher in the camrelizumab group (92.4% vs 84.5%; stratified HR, 0.54; P = .04).
- Patients in the camrelizumab group had higher locoregional relapse–free survival at 3 years than those in the observation group (92.8% vs 87.0%; stratified HR, 0.53; P = .046). However, the difference in overall survival between the groups was not significant.
- The safety analysis included 426 patients; 97.1% of those who received camrelizumab experienced at least one adverse event of any grade, the most common being reactive capillary endothelial proliferation compared with 85.5% of those in the observation group. Further, 11.2% of patients taking camrelizumab reported grade 3 or 4 events, including leukopenia and neutropenia compared with 3% in the observation group.
IN PRACTICE:
“The DIPPER trial demonstrated that adjuvant camrelizumab following induction-concurrent chemoradiotherapy significantly improved event-free survival by 9.6% with a favorable safety profile in patients with locoregionally advanced [nasopharyngeal carcinoma],” the authors wrote.
“If survival is eventually proven to be improved with induction chemoimmunotherapy, can we begin asking about de-escalation of chemoradiotherapy” for patients with nasopharyngeal carcinoma? “This question is exceptionally important, given the significant long-term consequences of radiotherapy on survivors,” the author of an accompanying editorial wrote.
SOURCE:
The study was led by Ye-Lin Liang, MD, of Sun Yat-sen University Cancer Center in Guangzhou, China, and was published online in JAMA.
LIMITATIONS:
The study included patients from an endemic region where nasopharyngeal carcinoma is predominantly linked to Epstein-Barr virus infection, potentially affecting the generalizability of the findings to nonendemic populations. The open-label design may have introduced bias. Additionally, combined positive scores for programmed cell death ligand 1 (PD-L1) were unavailable for some patients, potentially affecting the analysis of the correlation between PD-L1 expression and clinical outcomes.
DISCLOSURES:
The study was supported by the Noncommunicable Chronic Diseases-National Science and Technology Major Project, National Natural Science Foundation of China, Guangzhou Municipal Health Commission, Key Area Research and Development Program of Guangdong Province, Overseas Expertise Introduction Project for Discipline Innovation, and Cancer Innovative Research Program of Sun Yat-sen University Cancer Center. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
How Many Patients in Early Cancer Trials Get Drugs Ultimately Approved by FDA?
TOPLINE:
One in six patients in phase 2 cancer trials received treatments that were eventually approved by the Food and Drug Administration (FDA), a new analysis found. This proportion increased to 1 in 5 when considering National Comprehensive Cancer Network (NCCN) off-label recommendations and decreased to about 1 in 11 for approved regimens considered to have a substantial clinical benefit.
METHODOLOGY:
- Patients enroll in phase 2 oncology trials seeking access to promising new treatments, but the risk-benefit assessments and the likelihood of receiving a therapy that ultimately gains FDA approval remain unclear. Previous research suggests the odds are about 1 in 83 for patients enrolled in a phase 1 cancer trial.
- Researchers randomly selected 400 phase 2 cancer trials initiated between November 2012 and November 2015 (to give enough time for an approval to occur); these trials included more than 25,000 patients across 608 specific treatment cohorts testing 332 drugs.
- The primary endpoint was the proportion of patients enrolled in phase 2 trials who received a treatment regimen that later attained FDA approval — defined as the “therapeutic proportion.”
- A secondary endpoint was determining the therapeutic proportion based on the therapeutic value of drugs. The three benchmarks were FDA approval alone, FDA approval plus NCCN off-label recommendations, and FDA approval for drugs considered to have a substantial clinical benefit, based on the European Society for Medical Oncology-Magnitude of Clinical Benefit Scale (ESMO-MCBS).
TAKEAWAY:
- A total of 4045 patients received a treatment regimen that advanced to FDA approval, corresponding to a therapeutic proportion of 16.2%.
- The therapeutic proportion increased to 19.4% when considering NCCN off-label recommendations and decreased to 9.3% for FDA-approved regimens considered to have a substantial clinical benefit, based on the ESMO-MCBS.
- The proportion of patients who participated in a trial in which the drug-indication pairing went on to phase 3 testing was 32.5%.
- Enrollment in a trial featuring biomarker enrichment, an immunotherapy drug, a large phase 2 cohort, or a nonrandomized, industry-sponsored design each showed a trend toward a higher therapeutic proportion.
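The headline figures above are simple proportion arithmetic. As a quick sketch (using the enrollment counts reported in this analysis; the `one_in_n` helper is ours, not the paper's), the reported percentages reduce to the "1 in N" phrasing like so:

```python
# Sketch: converting the reported "therapeutic proportion" figures
# into the "1 in N" phrasing used in the TOPLINE. Counts come from
# the analysis summarized above; the helper name is illustrative.

def one_in_n(proportion: float) -> int:
    """Express a proportion as the nearest whole '1 in N'."""
    return round(1 / proportion)

approved = 4045    # patients whose regimen later gained FDA approval
enrolled = 25000   # approximate total enrollment across the 400 trials

proportion = approved / enrolled
print(f"{proportion:.1%}")             # -> 16.2%
print(f"1 in {one_in_n(proportion)}")  # -> 1 in 6

# The two secondary benchmarks reported above:
print(f"1 in {one_in_n(0.194)}")  # NCCN off-label included -> 1 in 5
print(f"1 in {one_in_n(0.093)}")  # ESMO-MCBS substantial benefit -> 1 in 11
```

This also shows why the three benchmarks round to 1 in 6, 1 in 5, and roughly 1 in 11, respectively.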
IN PRACTICE:
“By entering a phase 2 trial, a patient has a one in six chance of receiving a treatment that will later be approved for their condition,” the authors wrote. “The proportions described here, when juxtaposed with those estimated previously for phase 1 trials, suggest a striking improvement for a patient’s therapeutic prospects. This suggests that phase 1 trials do a good job at protecting patients downstream from unsafe and ineffective cancer treatments.”
In an editorial accompanying the study, Howard S. Hochster, MD, of the Rutgers Cancer Institute in New Brunswick, New Jersey, suggested that the 16.2% therapeutic proportion reported may be understated. For instance, “if using the criterion of drugs that were FDA approved in any indication and dose, the proportion of patient benefit in these trials rises to 38%, with a 51% benefit rate considering inclusion in NCCN guidelines,” he wrote.
SOURCE:
This study, led by Charlotte Ouimet, MSc, Department of Equity, Ethics and Policy, McGill University School of Population and Global Health, Montréal, Québec, Canada, was published online in Journal of the National Cancer Institute.
LIMITATIONS:
The longitudinal design of this study required using a historical cohort of phase 2 clinical trials, which may not reflect current drug development patterns. This study was underpowered to determine trial characteristics that predicted higher therapeutic proportions. Furthermore, the exclusion of cytotoxic drugs from the analysis resulted in a somewhat restricted view of overall drug development.
DISCLOSURES:
This study was supported by the Canadian Institutes of Health Research. The authors reported having no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Patients Have Many Fears, Misconceptions About Radiation Therapy
TOPLINE:
A cross-sectional survey of patients preparing for their first radiation therapy consultation found that many patients worried about the physical effects of radiation therapy, including pain, memory loss, and nausea, and more than 60% said they were concerned about their ability to perform daily activities. Respondents reported a range of other worries, including the financial cost of treatment, transportation to treatment sessions, and the ability to continue working, as well as misconceptions about radiation therapy, such as concerns about emitting radiation to others.
METHODOLOGY:
- Toxicities from cancer therapies can significantly affect patients’ quality of life and may contribute to their apprehensions before starting a new treatment. Some studies have indicated that patients may have misconceptions about chemotherapy, but less is known about patients’ perceptions of radiation therapy.
- Researchers conducted a cross-sectional survey of patients presenting for initial radiation therapy consultation at a single academic institution and analyzed responses from 214 patients (52% men; 51% White individuals) with no prior radiation therapy experience.
- The patients completed a 30-question electronic survey about radiation therapy perceptions and fears or concerns prior to their initial radiation consultation.
- Cancer diagnoses spanned 18 disease sites, with hematologic malignancies (21%), breast cancer (18%), and lung cancer (15%) being the most common.
TAKEAWAY:
- Physical adverse effects were the top concern for patients. These included radiation-induced pain (67%), memory loss (62%), nausea/vomiting (60%), and skin reactions (58%).
- Patients expressed concerns about the impact radiation therapy would have on daily activities, with 62% reporting being moderately or very concerned about their ability to perform daily activities and 37% worried about their ability to continue working. Other concerns included the ability to exercise (over half of respondents), financial cost (36%), and transportation to treatment sessions (26%).
- Misconceptions among patients were also common, with 48% expressing concerns about emitting radiation to others and 45% worrying about excreting radioactive urine or stool.
- Patients had varied levels of prior understanding of radiation therapy. Half of patients reported a complete lack of knowledge about radiation therapy, and 35% said they had read or heard stories about bad adverse effects.
IN PRACTICE:
“Our study suggests that a survey administered prior to radiation oncology consultation can reveal patients’ primary concerns which could promote a more patient-centered discussion that addresses specific concerns and involves appropriate services to help the patient,” the authors wrote.
SOURCE:
This study, led by Jennifer Novak, MD, MS, Department of Radiation Oncology, City of Hope National Medical Center, Duarte, California, was published online in Advances in Radiation Oncology.
LIMITATIONS:
Limitations included response bias and time constraints, which prevented many eligible patients from completing the survey. The single-institution design limits the generalizability of the findings. The survey results also showed a disproportionate focus on physical effects over the social impacts of radiation therapy, which could have limited the comprehensiveness of the findings.
DISCLOSURES:
The authors reported no specific funding for this work and no relevant competing financial interests or personal relationships that could have influenced the work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
MRI-Guided SBRT Cuts Long-Term Toxicities in Prostate Cancer
TOPLINE:
In a secondary analysis of the phase 3 MIRAGE trial, MRI-guided stereotactic body radiotherapy (SBRT) with tighter planning margins was associated with significantly less late genitourinary and gastrointestinal toxicity and better patient-reported bowel and sexual quality of life over 2 years than CT-guided SBRT in patients with localized prostate cancer.
METHODOLOGY:
- MRI-guided SBRT is known to reduce planning margins in prostate cancer and lead to less acute toxicity compared with standard CT-guided SBRT. However, the long-term benefits of the MRI-guided approach remain unclear.
- To find out, researchers conducted the phase 3 MIRAGE trial, in which 156 patients with localized prostate cancer were randomly assigned to receive either MRI-guided SBRT with 2-mm margins or CT-guided SBRT with 4-mm margins.
- The MIRAGE trial initially reported the primary outcome of acute genitourinary grade ≥ 2 toxicity within 90 days of SBRT.
- In this secondary analysis, researchers evaluated physician-reported late genitourinary and gastrointestinal toxicity, along with changes in various patient-reported quality-of-life scores over a 2-year follow-up period.
TAKEAWAY:
- Over a period of 2 years, MRI-guided SBRT was associated with a significantly lower cumulative incidence of grade ≥ 2 genitourinary toxicities compared with CT-guided SBRT (27% vs 51%; P = .004). Similar outcomes were noted for grade ≥ 2 gastrointestinal toxicities (1.4% with MRI vs 9.5% with CT; P = .025).
- Fewer patients who received MRI-guided SBRT reported deterioration in urinary irritation between 6 and 24 months after radiotherapy — 14 of 73 patients (19.2%) in the MRI group vs 24 of 68 patients (35.3%) in the CT group (P = .031).
- Patients receiving MRI-guided SBRT were also less likely to experience clinically relevant deterioration in bowel function (odds ratio [OR], 0.444; P = .035) and sexual health score (OR, 0.366; P = .03).
- Between 6 and 24 months after radiotherapy, 26.4% of patients (19 of 72) in the MRI group vs 42.3% (30 of 71) in the CT group reported clinically relevant deterioration in bowel function.
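The bowel-function counts above can be used to sanity-check the direction of the reported odds ratio. A crude (unadjusted) odds ratio computed from a 2x2 table of the reported counts comes out close to, but not identical to, the published OR of 0.444, which presumably comes from an adjusted model:

```python
# Sketch: crude odds ratio from the bowel-function counts reported
# above (19 of 72 MRI patients vs 30 of 71 CT patients deteriorated).
# The trial's OR of 0.444 is likely model-adjusted, so the crude
# value differs slightly but points the same way.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table: [a events, b non-events] vs [c events, d non-events]."""
    return (a * d) / (b * c)

mri_deteriorated, mri_total = 19, 72
ct_deteriorated, ct_total = 30, 71

crude_or = odds_ratio(
    mri_deteriorated, mri_total - mri_deteriorated,
    ct_deteriorated, ct_total - ct_deteriorated,
)
print(round(crude_or, 2))  # ~0.49, same direction as the reported 0.444
```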
IN PRACTICE:
“Our secondary analysis of a randomized trial revealed that aggressive planning for margin reduction with MRI guidance vs CT guidance for prostate SBRT led to lower physician-scored genitourinary and gastrointestinal toxicity and better bowel and sexual quality-of-life metrics over 2 years of follow-up,” the authors wrote.
SOURCE:
This study, led by Amar U. Kishan of the University of California, Los Angeles, was published online in European Urology.
LIMITATIONS:
The absence of blinding in this study may have influenced both physician-scored toxicity assessments and patient-reported quality-of-life outcomes. The MIRAGE trial was not specifically designed with sufficient statistical power to evaluate the secondary analyses presented in this study.
DISCLOSURES:
This study was supported by grants from the US Department of Defense. Several authors reported receiving grants or personal fees among other ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
TOPLINE:
METHODOLOGY:
- MRI-guided SBRT is known to reduce planning margins in prostate cancer and lead to less acute toxicity compared with standard CT-guided SBRT. However, the long-term benefits of the MRI-guided approach remain unclear.
- To find out, researchers conducted the phase 3 MIRAGE trial, in which 156 patients with localized prostate cancer were randomly assigned to receive either MRI-guided SBRT with 2-mm margins or CT-guided SBRT with 4-mm margins.
- The MIRAGE trial initially reported the primary outcome of acute genitourinary grade ≥ 2 toxicity within 90 days of SBRT.
- In this secondary analysis, researchers evaluated physician-reported late genitourinary and gastrointestinal toxicity, along with changes in various patient-reported quality-of-life scores over a 2-year follow-up period.
TAKEAWAY:
- Over a period of 2 years, MRI-guided SBRT was associated with a significantly lower cumulative incidence of grade ≥ 2 genitourinary toxicities compared with CT-guided SBRT (27% vs 51%; P = .004). Similar outcomes were noted for grade ≥ 2 gastrointestinal toxicities (1.4% with MRI vs 9.5% with CT; P = .025).
- Fewer patients who received MRI-guided SBRT reported deterioration in urinary irritation between 6 and 24 months after radiotherapy — 14 of 73 patients (19.2%) in the MRI group vs 24 of 68 patients (35.3%) in the CT group (P = .031).
- Patients receiving MRI-guided SBRT were also less likely to experience clinically relevant deterioration in bowel function (odds ratio [OR], 0.444; P = .035) and sexual health score (OR, 0.366; P = .03).
- Between 6 and 24 months after radiotherapy, 26.4% of patients (19 of 72) in the MRI group vs 42.3% (30 of 71) in the CT group reported clinically relevant deterioration in bowel function.
IN PRACTICE:
“Our secondary analysis of a randomized trial revealed that aggressive planning for margin reduction with MRI guidance vs CT guidance for prostate SBRT led to lower physician-scored genitourinary and gastrointestinal toxicity and better bowel and sexual quality-of-life metrics over 2 years of follow-up,” the authors wrote.
SOURCE:
This study, led by Amar U. Kishan, University of California Los Angeles, was published online in European Urology.
LIMITATIONS:
The absence of blinding in this study may have influenced both physician-scored toxicity assessments and patient-reported quality-of-life outcomes. The MIRAGE trial was not specifically designed with sufficient statistical power to evaluate the secondary analyses presented in this study.
DISCLOSURES:
This study was supported by grants from the US Department of Defense. Several authors reported receiving grants or personal fees among other ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Hepatocellular Carcinoma: Leading Causes of Mortality Predicted
TOPLINE:
Alcohol-associated liver disease (ALD) will likely become the leading cause of hepatocellular carcinoma (HCC)-related mortality by 2026, and metabolic dysfunction–associated steatotic liver disease (MASLD) is projected to become the second leading cause by 2032, a new analysis found.
METHODOLOGY:
- HCC accounts for 75%-85% of primary liver cancers and most liver cancer deaths. Researchers have observed an upward trend in the incidence of and mortality from HCC in the past 2 decades.
- This cross-sectional study analyzed 188,280 HCC-related deaths among adults aged 25 and older to determine trends in mortality rates and project age-standardized mortality rates through 2040. Data came from the National Vital Statistics System database from 2006 to 2022.
- Researchers stratified mortality data by etiology of liver disease (ALD, hepatitis B virus, hepatitis C virus, and MASLD), age group (25-64 years or ≥ 65 years), sex, and race/ethnicity.
- Demographic data showed that 77.4% of deaths occurred in men, 55.6% in individuals aged 65 years or older, and 62.3% in White individuals.
TAKEAWAY:
- Overall, the age-standardized mortality rate for HCC-related deaths increased from 3.65 per 100,000 persons in 2006 to 5.03 in 2022 and was projected to increase to 6.39 per 100,000 persons by 2040.
- Sex- and age-related disparities were substantial. Men had much higher rates of HCC-related mortality than women (8.15 vs 2.33 per 100,000 persons), with a projected rate among men of 9.78 per 100,000 persons by 2040. In 2022, HCC-related mortality rates for people aged 65 years or older were 10 times higher than for those aged 25-64 years (18.37 vs 1.79 per 100,000 persons), and the rate in the older group was projected to reach 32.81 per 100,000 persons by 2040.
- Although hepatitis C virus–related deaths were projected to decline from 0.69 to 0.03 per 100,000 persons by 2034, ALD- and MASLD-related deaths showed increasing trends, with both projected to become the two leading causes of HCC-related mortality in the next few years.
- Racial disparities were also evident. The American Indian/Alaska Native population showed the largest projected increase in HCC-related mortality rates, from 5.46 per 100,000 persons in 2006 to a projected 14.71 per 100,000 persons by 2040.
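To make the scale of these projections concrete, the sketch below extrapolates the overall age-standardized rate with a naive straight-line trend through the two endpoints reported in the summary (3.65 per 100,000 in 2006; 5.03 in 2022). This is purely illustrative and is not the study's projection model, which arrives at 6.39 per 100,000 by 2040; the naive line lands in the same neighborhood (~6.58).

```python
# Naive linear extrapolation of the overall age-standardized HCC mortality
# rate, using only the two endpoints reported in the summary. Illustration
# only; the study's own model projects 6.39 per 100,000 persons by 2040.

def linear_projection(y0: float, t0: int, y1: float, t1: int, t_target: int) -> float:
    """Extend the straight line through (t0, y0) and (t1, y1) to t_target."""
    slope = (y1 - y0) / (t1 - t0)
    return y0 + slope * (t_target - t0)

rate_2040 = linear_projection(3.65, 2006, 5.03, 2022, 2040)
print(round(rate_2040, 2))  # ~6.58, near the study's modeled 6.39
```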
IN PRACTICE:
“HCC mortality was projected to continue increasing in the US, primarily due to rising rates of deaths attributable to ALD and MASLD,” the authors wrote.
This “study highlights the importance of addressing these conditions to decrease the burden of liver disease and liver disease mortality in the future,” Emad Qayed, MD, MPH, Emory University School of Medicine, Atlanta, wrote in an accompanying editorial.
SOURCE:
The study was led by Sikai Qiu, MM, The Second Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China, and was published online in JAMA Network Open.
LIMITATIONS:
The National Vital Statistics System database used in this study captured only mortality data without access to detailed clinical records or individual medical histories. Researchers could not analyze socioeconomic factors or individual-level risk factors owing to data anonymization requirements. Additionally, the inclusion of the COVID-19 pandemic period could have influenced observed trends and reliability of future projections.
DISCLOSURES:
This study was supported by grants from the National Natural Science Foundation of China. Several authors reported receiving consulting fees, speaking fees, or research support from various sources.
Does Intensive Follow-Up Testing Improve Survival in CRC?
TOPLINE:
More frequent follow-up testing after curative surgery for stage II or III colorectal cancer (CRC) did not significantly reduce 10-year overall or CRC-specific mortality, according to findings from a secondary analysis.
METHODOLOGY:
- After curative surgery for CRC, intensive patient follow-up is common in clinical practice. However, there’s limited evidence to suggest that more frequent testing provides a long-term survival benefit.
- In the COLOFOL trial, patients with stage II or III CRC who had undergone curative resection were randomly assigned to either high-frequency follow-up (CT scans and CEA screening at 6, 12, 18, 24, and 36 months) or low-frequency follow-up (testing at 12 and 36 months) after surgery.
- This secondary analysis of the COLOFOL trial included 2456 patients (median age, 65 years), 1227 of whom received high-frequency follow-up and 1229 of whom received low-frequency follow-up.
- The main outcomes of the secondary analysis were the 10-year overall mortality and CRC-specific mortality rates.
- The analysis included both intention-to-treat and per-protocol approaches, with outcomes measured through December 2020.
TAKEAWAY:
- In the intention-to-treat analysis, the 10-year overall mortality rates were similar between the high- and low-frequency follow-up groups — 27.1% and 28.4%, respectively (risk difference, 1.3%; P = .46).
- A per-protocol analysis confirmed these findings: The 10-year overall mortality risk was 26.4% in the high-frequency group and 27.8% in the low-frequency group.
- The 10-year CRC–specific mortality rate was also similar between the high-frequency and low-frequency groups: 15.6% and 16.0%, respectively (risk difference, 0.4%; P = .72). The same pattern was seen in the per-protocol analysis, which found a 10-year CRC–specific mortality risk of 15.6% in the high-frequency group and 15.9% in the low-frequency group.
- Subgroup analyses by cancer stage and location (rectal and colon) also revealed no significant differences in mortality outcomes between the two follow-up groups.
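The risk differences quoted above follow directly from the reported mortality rates. A minimal sketch, using only numbers from the summary (low-frequency minus high-frequency follow-up, in percentage points):

```python
# Risk differences from the COLOFOL secondary analysis, recomputed from
# the reported 10-year mortality rates (intention-to-treat analysis).

def risk_difference(rate_low_freq: float, rate_high_freq: float) -> float:
    """Absolute difference in percentage points, rounded to 1 decimal place."""
    return round(rate_low_freq - rate_high_freq, 1)

overall = risk_difference(28.4, 27.1)        # 1.3, as reported
crc_specific = risk_difference(16.0, 15.6)   # 0.4, as reported
print(overall, crc_specific)
```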
IN PRACTICE:
“This secondary analysis of the COLOFOL randomized clinical trial found that, among patients with stage II or III colorectal cancer, more frequent follow-up testing with CT scan and CEA screening, compared with less frequent follow-up, did not result in a significant rate reduction in 10-year overall mortality or colorectal cancer-specific mortality,” the authors concluded. “The results of this trial should be considered as the evidence base for updating clinical guidelines.”
SOURCE:
The study, led by Henrik Toft Sørensen, MD, PhD, DMSc, DSc, Aarhus University Hospital and Aarhus University, Aarhus, Denmark, was published online in JAMA Network Open.
LIMITATIONS:
Staff turnover at recruitment centers potentially affected protocol adherence. The inability to blind patients and physicians to the follow-up frequency was another limitation. The low-frequency follow-up protocol was less intensive than that recommended in the current guidelines by the National Comprehensive Cancer Network and the American Society of Clinical Oncology, potentially limiting comparisons to current standard practices.
DISCLOSURES:
The initial trial received unrestricted grants from multiple organizations including the Nordic Cancer Union, A.P. Møller Foundation, Beckett Foundation, Danish Cancer Society, and Swedish Cancer Foundation project. The authors reported no relevant conflicts of interest.
Novel Digital Intervention Shows Promise for Depression
TOPLINE:
InterRhythmic care (IRC), a novel digital intervention, was linked to greater improvements in depressive symptoms, anxiety, interpersonal relationships, and social functioning in patients with major depressive disorder (MDD) than internet-based general psychoeducation, new research showed.
METHODOLOGY:
- The randomized, single-blind trial included 120 outpatients with MDD (mean age, 28.2 years; 99% Han Chinese; 83% women) recruited from the Shanghai Mental Health Center between March and November 2021, who were randomly assigned to receive either IRC or internet general psychoeducation (control group).
- IRC included computer-based psychoeducation on stabilizing social rhythm regularity and resolution of interpersonal problems plus brief online interactions with clinicians. Patients received 10 minutes of IRC daily, Monday through Friday, for 8 weeks.
- The researchers assessed participants’ depressive symptoms, anxiety symptoms, interpersonal relationships, social function, and biological rhythms using the 17-item Hamilton Depression Rating Scale, Hamilton Anxiety Scale, Interpersonal Comprehensive Diagnostic Scale, Sheehan Disability Scale, and Morning and Evening Questionnaire at baseline and at 8 weeks.
TAKEAWAY:
- The participants who received IRC had significantly lower Hamilton Depression Rating total scores than those who received internet general psychoeducation (P < .001).
- The IRC group had lower Hamilton Anxiety Scale total scores than the control group, indicating improved anxiety symptoms (P < .001).
- The IRC group also showed improved outcomes in interpersonal relationships, as indicated by lower Interpersonal Comprehensive Diagnostic Scale total scores (P < .001).
- Social functioning improved significantly in the IRC group, as measured by the Sheehan Disability Scale subscores for work/school (P = .03), social life (P < .001), and family life (P = .001).
IN PRACTICE:
“This study demonstrated that IRC can improve clinical symptoms such as depressive symptoms, anxiety symptoms, interpersonal problems, and social function in patients with MDD. Our study suggested that the IRC can be used in clinical practice,” the investigators wrote.
SOURCE:
The study was led by Chuchen Xu, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine in China. It was published online on November 20, 2024, in The Journal of Psychiatric Research.
LIMITATIONS:
The 8-week follow-up period was considered too short to comprehensively evaluate the intervention’s long-term impact. Additionally, the researchers had to check and supervise assignment completion, which increased research costs and may limit broader implementation.
DISCLOSURES:
The investigators reported no conflicts of interest.
TOPLINE:
InterRhythmic care (IRC), a novel digital intervention, was linked to greater improvements in depressive symptoms, anxiety, interpersonal relationships, and social functioning in patients with major depressive disorder (MDD), compared with internet general psychoeducation in new research.
METHODOLOGY:
- The randomized, single-blind trial included 120 outpatients from the Shanghai Mental Health Center between March and November 2021 with MDD (mean age, 28.2 years; 99% Han Chinese; 83% women) who were randomly assigned to receive either IRC or internet general psychoeducation (control group).
- IRC included computer-based psychoeducation on stabilizing social rhythm regularity and resolution of interpersonal problems plus brief online interactions with clinicians. Patients received 10 minutes of IRC daily, Monday through Friday, for 8 weeks.
- The researchers assessed participants’ depressive symptoms, anxiety symptoms, interpersonal relationships, social function, and biological rhythms using the 17-item Hamilton Depression Rating Scale, Hamilton Anxiety Scale, Interpersonal Comprehensive Diagnostic Scale, Sheehan Disability Scale, and Morning and Evening Questionnaire at baseline and at 8 weeks.
TAKEAWAY:
- The participants who received IRC had significantly lower Hamilton Depression Rating total scores than those who received internet general psychoeducation (P < .001).
- The IRC group demonstrated improved anxiety symptoms, as evidenced by lower Hamilton Anxiety Scale total scores, than those observed for the control group (P < .001).
- The IRC group also showed improved outcomes in interpersonal relationships, as indicated by lower Interpersonal Comprehensive Diagnostic Scale total scores (P < .001).
- Social functioning improved significantly in the IRC group, as measured by the Sheehan Disability Scale subscores for work/school (P = .03), social life (P < .001), and family life (P = .001).
IN PRACTICE:
“This study demonstrated that IRC can improve clinical symptoms such as depressive symptoms, anxiety symptoms, interpersonal problems, and social function in patients with MDD. Our study suggested that the IRC can be used in clinical practice,” the investigators wrote.
SOURCE:
The study was led by Chuchen Xu, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine in China. It was published online on November 20, 2024, in The Journal of Psychiatric Research.
LIMITATIONS:
The 8-week follow-up period was considered too short to comprehensively evaluate the intervention’s long-term impact. Additionally, the researchers had to check and supervise assignment completion, which increased research costs and may, therefore, potentially limit broader implementation.
DISCLOSURES:
The investigators reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
TOPLINE:
InterRhythmic care (IRC), a novel digital intervention, was linked to greater improvements in depressive symptoms, anxiety, interpersonal relationships, and social functioning in patients with major depressive disorder (MDD), compared with internet general psychoeducation in new research.
METHODOLOGY:
- The randomized, single-blind trial included 120 outpatients from the Shanghai Mental Health Center between March and November 2021 with MDD (mean age, 28.2 years; 99% Han Chinese; 83% women) who were randomly assigned to receive either IRC or internet general psychoeducation (control group).
- IRC included computer-based psychoeducation on stabilizing social rhythm regularity and resolution of interpersonal problems plus brief online interactions with clinicians. Patients received 10 minutes of IRC daily, Monday through Friday, for 8 weeks.
- The researchers assessed participants’ depressive symptoms, anxiety symptoms, interpersonal relationships, social function, and biological rhythms using the 17-item Hamilton Depression Rating Scale, Hamilton Anxiety Scale, Interpersonal Comprehensive Diagnostic Scale, Sheehan Disability Scale, and Morning and Evening Questionnaire at baseline and at 8 weeks.
TAKEAWAY:
- The participants who received IRC had significantly lower Hamilton Depression Rating total scores than those who received internet general psychoeducation (P < .001).
- The IRC group demonstrated improved anxiety symptoms, as evidenced by lower Hamilton Anxiety Scale total scores, than those observed for the control group (P < .001).
- The IRC group also showed improved outcomes in interpersonal relationships, as indicated by lower Interpersonal Comprehensive Diagnostic Scale total scores (P < .001).
- Social functioning improved significantly in the IRC group, as measured by the Sheehan Disability Scale subscores for work/school (P = .03), social life (P < .001), and family life (P = .001).
IN PRACTICE:
“This study demonstrated that IRC can improve clinical symptoms such as depressive symptoms, anxiety symptoms, interpersonal problems, and social function in patients with MDD. Our study suggested that the IRC can be used in clinical practice,” the investigators wrote.
SOURCE:
The study was led by Chuchen Xu, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine in China. It was published online on November 20, 2024, in The Journal of Psychiatric Research.
LIMITATIONS:
The 8-week follow-up period was considered too short to comprehensively evaluate the intervention’s long-term impact. Additionally, the researchers had to check and supervise assignment completion, which increased research costs and may therefore limit broader implementation.
DISCLOSURES:
The investigators reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
How Are Patients Managing Intermediate-Risk Prostate Cancer?
TOPLINE:
The use of active surveillance and watchful waiting among US patients with intermediate-risk prostate cancer more than doubled between 2010 and 2020, from 5% to 12.3%, although uptake remained low among patients in higher Gleason grade groups.
METHODOLOGY:
- Current guidelines support active surveillance or watchful waiting for select patients with intermediate-risk prostate cancer. These observation strategies may help reduce the adverse effects associated with immediate radical treatment.
- To understand the trends over time in the use of active surveillance and watchful waiting, researchers looked at data of 147,205 individuals with intermediate-risk prostate cancer from the Surveillance, Epidemiology, and End Results prostate cancer database between 2010 and 2020 in the United States.
- Criteria for intermediate-risk included Gleason grade group 2 or 3, prostate-specific antigen (PSA) levels of 10-20 ng/mL, or stage cT2b of the disease. Researchers also included trends for patients with Gleason grade group 1, as a reference group.
- Researchers assessed the temporal trends and factors associated with the selection of active surveillance and watchful waiting in this population.
TAKEAWAY:
- Overall, the rate of active surveillance and watchful waiting among patients with intermediate-risk disease more than doubled, from 5% in 2010 to 12.3% in 2020.
- Between 2010 and 2020, the use of active surveillance and watchful waiting increased significantly among patients in Gleason grade group 1 (13.2% to 53.8%) and Gleason grade group 2 (4.0% to 11.6%) but remained stable for those in Gleason grade group 3 (2.5% to 2.8%; P = .85). For those with PSA levels < 10 ng/mL, adoption increased from 3.4% in 2010 to 9.2% in 2020 and more than doubled (9.3% to 20.7%) for those with PSA levels of 10-20 ng/mL.
- Patients in higher Gleason grade groups were significantly less likely to undergo active surveillance or watchful waiting (Gleason grade group 2 vs 1: odds ratio [OR], 0.83; Gleason grade group 3 vs 1: OR, 0.79).
- Hispanic or Latino individuals (OR, 0.98) and non-Hispanic Black individuals (OR, 0.99) were slightly less likely to adopt these strategies than non-Hispanic White individuals.
IN PRACTICE:
“This study found a significant increase in initial active surveillance and watchful waiting for intermediate-risk prostate cancer between 2010 and 2020,” the authors wrote. “Research priorities should include reducing upfront overdiagnosis and better defining criteria for starting and stopping active surveillance and watchful waiting beyond conventional clinical measures such as GGs [Gleason grade groups] or PSA levels alone.”
SOURCE:
This study, led by Ismail Ajjawi, Yale School of Medicine, New Haven, Connecticut, was published online in JAMA.
LIMITATIONS:
This study relied on observational data and therefore could not capture various factors influencing clinical decision-making processes. Additionally, the absence of information on patient outcomes restricted the ability to assess the long-term implications of different management strategies.
DISCLOSURES:
This study received financial support from the Urological Research Foundation. Several authors reported ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.