Does Extended Postop Follow-Up Improve Survival in Gastric Cancer?
TOPLINE:
Extended regular follow-up beyond 5 years after gastrectomy for gastric cancer was associated with significantly better overall and postrecurrence survival.
METHODOLOGY:
- Currently, postgastrectomy cancer surveillance typically lasts 5 years, although some centers now monitor patients beyond this point.
- To investigate the potential benefit of extended surveillance, researchers used Korean National Health Insurance claims data to identify 40,468 patients with gastric cancer who were disease free 5 years after gastrectomy — 14,294 received extended regular follow-up visits and 26,174 did not.
- The extended regular follow-up group was defined as patients who underwent endoscopy or abdominopelvic CT between 2 months and 2 years before the diagnosis of late recurrence or gastric remnant cancer and who had two or more examinations between 5.5 and 8.5 years after gastrectomy. Late recurrence was defined as a recurrence diagnosed more than 5 years after gastrectomy.
- Researchers used Cox proportional hazards regression to evaluate the independent association between follow-up and overall and postrecurrence survival rates.
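To make the Cox proportional hazards idea concrete, here is a minimal sketch of maximizing the partial likelihood for a single binary covariate. The toy data below are invented for illustration only; they are not from the study, and a real analysis would use a dedicated package (e.g., lifelines) rather than a grid search.

```python
import math

# Toy survival records, invented for illustration (NOT study data).
# Each record: (years of follow-up, 1 if died / 0 if censored,
#               1 if extended follow-up / 0 if not)
data = [
    (2.0, 1, 0), (3.0, 1, 0), (4.0, 0, 0), (5.0, 1, 0),
    (4.0, 1, 1), (6.0, 0, 1), (7.0, 1, 1), (8.0, 0, 1),
]

def log_partial_likelihood(beta):
    """Breslow-approximation log partial likelihood for one binary covariate."""
    ll = 0.0
    for t_i, event, x_i in data:
        if not event:
            continue  # censored records contribute only through risk sets
        # Risk set: everyone still under observation at time t_i
        risk_set = [x for (t, _, x) in data if t >= t_i]
        ll += beta * x_i - math.log(sum(math.exp(beta * x) for x in risk_set))
    return ll

# Crude grid search for the maximizing coefficient; a real fit would use
# Newton-Raphson as implemented in statistical software.
beta_hat = max((b / 100 for b in range(-300, 301)),
               key=log_partial_likelihood)
hazard_ratio = math.exp(beta_hat)
print(f"beta_hat={beta_hat:.2f}, HR={hazard_ratio:.2f}")
```

With these toy data the extended follow-up group fares better, so the fitted hazard ratio comes out below 1, mirroring the direction of association the study reports.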
TAKEAWAY:
- Overall, 5 years postgastrectomy, the incidence of late recurrence or gastric remnant cancer was 7.8% — 4.0% between 5 and 10 years (1610 of 40,468 patients) and 9.4% after 10 years (1528 of 16,287 patients).
- Regular follow-up beyond 5 years was associated with a significant reduction in overall mortality — from 49.4% to 36.9% at 15 years (P < .001). Overall survival after late recurrence or gastric remnant cancer also improved significantly with extended regular follow-up, with the 5-year postrecurrence survival rate increasing from 32.7% to 71.1% (P < .001).
- The combination of endoscopy and abdominopelvic CT provided the highest 5-year postrecurrence survival rate (74.5%), compared with endoscopy alone (54.5%) or CT alone (47.1%).
- A time interval of more than 2 years between a previous endoscopy or abdominopelvic CT and diagnosis of late recurrence or gastric remnant cancer significantly decreased postrecurrence survival (hazard ratio [HR], 1.72 for endoscopy and HR, 1.48 for abdominopelvic CT).
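The incidence figures in the first bullet can be recomputed directly from the raw counts given there; this is simple arithmetic on the reported numbers, under the plausible reading that the overall figure pools both periods over the 5-year disease-free cohort.

```python
# Recomputing the reported cumulative incidence figures from the counts
# quoted in the text (illustrative arithmetic only).
late_5_10 = 1610    # late recurrences between 5 and 10 years
at_risk_5 = 40468   # patients disease free at 5 years
late_10 = 1528      # late recurrences after 10 years
at_risk_10 = 16287  # patients still followed beyond 10 years

rate_5_10 = 100 * late_5_10 / at_risk_5
rate_10 = 100 * late_10 / at_risk_10
rate_overall = 100 * (late_5_10 + late_10) / at_risk_5

print(f"{rate_5_10:.1f}% / {rate_10:.1f}% / {rate_overall:.1f}%")
```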
IN PRACTICE:
“These findings suggest that extended regular follow-up after 5 years post gastrectomy should be implemented clinically and that current practice and value of follow-up protocols in postoperative care of patients with gastric cancer be reconsidered,” the authors concluded.
The authors of an accompanying commentary cautioned that, while the study “successfully establishes groundwork for extending surveillance of gastric cancer in high-risk populations, more work is needed to strategically identify those who would benefit most from extended surveillance.”
SOURCE:
The study, with first author Ju-Hee Lee, MD, PhD, Department of Surgery, Hanyang University College of Medicine, Seoul, South Korea, and accompanying commentary were published online on June 18 in JAMA Surgery.
LIMITATIONS:
Recurrent cancer and gastric remnant cancer could not be distinguished from each other because clinical records were not analyzed. The claims database lacked detailed clinical information on individual patients, including cancer stages, and a separate analysis of tumor markers could not be performed.
DISCLOSURES:
The study was funded by a grant from the Korean Gastric Cancer Association. The study authors and commentary authors reported no conflicts of interest.
A version of this article appeared on Medscape.com.
New Vitamin D Recs: Testing, Supplementing, Dosing
This transcript has been edited for clarity.
I’m Dr. Neil Skolnik, and today I’m going to talk about the Endocrine Society Guideline on Vitamin D. The question of who and when to test for vitamin D, and when to prescribe vitamin D, comes up frequently. There have been a lot of studies, and many people I know have opinions about this, but I haven’t seen a lot of clear, evidence-based guidance. This much-needed guideline provides guidance, though I’m not sure that everyone is going to be happy with the recommendations. That said, the society did conduct a comprehensive assessment and systematic review of the evidence that was impressive and well done. For our discussion, I will focus on the recommendations for nonpregnant adults.
The assumption for all of the recommendations is that these are for individuals who are already getting the Institute of Medicine’s recommended amount of vitamin D, which is 600 IU daily for adults up to 70 years of age and 800 IU daily for those older than 70 years.
For adults aged 18-74 years who do not have prediabetes, the guidelines suggest against routinely testing for vitamin D deficiency and recommend against routine supplementation. For the older part of this cohort, adults aged 50-74 years, there is abundant randomized trial evidence showing little to no significant differences with vitamin D supplementation on outcomes of fracture, cancer, cardiovascular disease, kidney stones, or mortality. While supplementation is safe, there does not appear to be any benefit to routine supplementation or testing. It is important to note that the trials were done in populations that were meeting the daily recommended intake of vitamin D and who did not have low vitamin D levels at baseline, so individuals who may not be meeting the recommended daily intake through their diet or through sun exposure may consider vitamin D supplementation.
For adults with prediabetes, vitamin D supplementation is recommended to reduce the risk for progression from prediabetes to diabetes. This is about 1 in 3 adults in the United States. A number of trials have looked at vitamin D supplementation for adults with prediabetes in addition to lifestyle modification (diet and exercise). Vitamin D decreases the risk for progression from prediabetes to diabetes by approximately 10%-15%. The effect may be greater in those who are over age 60 and who have lower initial vitamin D levels.
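The 10%-15% relative reduction cited above can be translated into a rough number needed to treat. The relative risk reduction comes from the transcript; the baseline annual progression rate below is an illustrative assumption, not a figure from the guideline.

```python
# Back-of-the-envelope: relative risk reduction -> number needed to treat.
# baseline_risk is an assumed annual prediabetes-to-diabetes progression
# rate for illustration only; only the 10%-15% RRR is from the transcript.
baseline_risk = 0.10

results = {}
for rrr in (0.10, 0.15):       # relative risk reduction range cited
    arr = baseline_risk * rrr  # absolute risk reduction per year
    nnt = 1 / arr              # patients supplemented to prevent one progression
    results[rrr] = round(nnt)
    print(f"RRR {rrr:.0%}: ARR {arr:.1%}, NNT ~ {round(nnt)}")
```

Under this assumed baseline, roughly 67-100 adults with prediabetes would need supplementation for a year to prevent one progression; a higher baseline risk would shrink that number proportionally.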
Vitamin D in older adults (aged 75 or older) has a separate recommendation. In this age group, low vitamin D levels are common, with up to 20% of older adults having low levels. The guidelines suggest against testing vitamin D in adults aged 75 or over and recommend empiric vitamin D supplementation for all adults aged 75 or older. While observational studies have shown a relationship between low vitamin D levels in this age group and adverse outcomes, including falls, fractures, and respiratory infections, evidence from randomized placebo-controlled trials of vitamin D supplementation has been inconsistent with regard to benefit. That said, a meta-analysis has shown that vitamin D supplementation lowers mortality compared with placebo, with a relative risk of 0.96 (confidence interval, 0.93-1.00). There was no difference in effect according to setting (community vs nursing home), vitamin D dosage, or baseline vitamin D level.
There appeared to be a benefit of low-dose vitamin D supplementation on fall risk, with possibly greater fall risk when high-dose supplementation was used. No significant effect on fracture rate was seen with vitamin D supplementation alone, although there was a decrease in fractures when vitamin D was combined with calcium. In these studies, the median dose of calcium was 1000 mg per day.
Based on the probability of a “slight decrease in all-cause mortality” and its safety, as well as possible benefit to decrease falls, the recommendation is for supplementation for all adults aged 75 or older. Since there was not a consistent difference by vitamin D level, testing is not necessary.
Let’s now discuss dosage. The guidelines recommend daily lower-dose vitamin D over nondaily higher-dose vitamin D. Unfortunately, the guideline does not specify a dose of vitamin D. The supplementation dose used in trials of adults aged 75 or older ranged from 400 to 3333 IU daily, with an average dose of 900 IU daily, so it seems to me that a dose of 1000-2000 IU daily is a reasonable choice for older adults. In the prediabetes trials, a higher average dose was used, with a mean of 3500 IU daily, so a higher dose might make sense in this group.
Dr. Skolnik is a professor in the Department of Family Medicine, Sidney Kimmel Medical College of Thomas Jefferson University, Philadelphia, and associate director, Department of Family Medicine, Abington Jefferson Health, Abington, Pennsylvania. He disclosed ties with AstraZeneca, Bayer, Teva, Eli Lilly, Boehringer Ingelheim, Sanofi, Sanofi Pasteur, GlaxoSmithKline, and Merck.
A version of this article first appeared on Medscape.com.
Let ’em Play: In Defense of Youth Football
Over the last couple of decades, I have become increasingly uncomfortable watching American-style football on television. Lax refereeing coupled with over-juiced players who can generate g-forces previously attainable only on a NASA rocket sled has resulted in a spate of injuries I find unacceptable. The revolving door of transfers from college to college has made the term scholar-athlete a relic that can be applied to only a handful of players at the smallest uncompetitive schools.
Many of you who are regular readers of Letters from Maine have probably tired of my boasting that when I played football in high school we wore leather helmets. I enjoyed playing football and continued playing in college for a couple of years until it became obvious that “bench” was going to be my usual position. But I would not want my grandson to play college football, certainly not at the elite level. Were he to do so, he would be putting himself at risk for significant injury by participating in what I no longer view as an appealing activity. Let me add that I am not including chronic traumatic encephalopathy among my concerns, because I think its association with football injuries is far from settled. My concern is more about spinal cord injuries, which, although infrequent, are almost always devastating.
I should also make it perfectly clear that my lack of enthusiasm for college and professional football does not place me among the increasingly vocal throng calling for the elimination of youth football. For the 5- to 12-year-olds, putting on pads and a helmet and scrambling around on a grassy field bumping shoulders and heads with their peers is a wonderful way to burn off energy and satisfies a need for roughhousing that comes naturally to most young boys (and many girls). It is extremely unlikely that any one of those kids playing youth football will reach the elite college or professional level. Other activities and the realization that football is not in their future weed the field during adolescence.
Although there have been some studies suggesting that starting football at an early age is associated with increased injury risk, a recent and well-controlled study published in the journal Sports Medicine has found no such association in professional football players. This finding makes some sense when you consider that most of the children in this age group are not mustering g-forces anywhere close to those a college or professional athlete can generate.
Another recent study published in the Journal of Pediatrics offers more evidence to consider before one passes judgment on youth football. When reviewing the records of nearly 1500 patients in a specialty-care concussion setting at the Children’s Hospital of Philadelphia, investigators found that recreation-related concussions and non–sport- or recreation-related concussions were more prevalent than sports-related concussions. The authors propose that “less supervision at the time of injury and less access to established concussion healthcare following injury” may explain their observations.
Of course, as a card-carrying AARP old fogey, I long for the good old days when youth sports were organized by the kids in backyards and playgrounds. There we learned to pick teams and deal with the disappointment of not being a first-round pick and the embarrassment of being a last rounder. We settled out-of-bounds calls and arguments about ball possession without adults’ assistance — or video replays for that matter. But those days are gone and likely never to return, with parental anxiety running at record highs. We must accept that youth sports organized for kids by adults are the way it’s going to be for the foreseeable future.
As long as the program is organized with the emphasis on fun and not structured as a fast track to elite play, it will be healthier for the kids than sitting on the couch at home watching the carnage on TV.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at pdnews@mdedge.com.
Many of you who are regular readers of Letters from Maine have probably tired of my boasting that when I played football in high school we wore leather helmets. I enjoyed playing football and continued playing in college for a couple of years until it became obvious that “bench” was going to be my usual position. But, I would not want my grandson to play college football. Certainly, not at the elite college level. Were he to do so, he would be putting himself at risk for significant injury by participating in what I no longer view as an appealing activity. Let me add that I am not including chronic traumatic encephalopathy among my concerns, because I think its association with football injuries is far from settled. My concern is more about spinal cord injuries, which, although infrequent, are almost always devastating.
I should also make it perfectly clear that my lack of enthusiasm for college and professional football does not place me among the increasingly vocal throng calling for the elimination of youth football. For the 5- to 12-year-olds, putting on pads and a helmet and scrambling around on a grassy field bumping shoulders and heads with their peers is a wonderful way to burn off energy, and it satisfies a need for roughhousing that comes naturally to most young boys (and many girls). The chance of any one of those kids playing youth football reaching the elite college or professional level is extremely small. Other activities, and the realization that football is not in their future, weed the field during adolescence.
Although there have been some studies suggesting that starting football at an early age is associated with increased injury risk, a recent and well-controlled study published in the journal Sports Medicine has found no such association in professional football players. This finding makes some sense when you consider that most of the children in this age group are not mustering g-forces anywhere close to those a college or professional athlete can generate.
Another recent study published in the Journal of Pediatrics offers more evidence to consider before one passes judgment on youth football. When reviewing the records of nearly 1500 patients in a specialty-care concussion setting at the Children’s Hospital of Philadelphia, investigators found that recreation-related concussions and non–sport- or recreation-related concussions were more prevalent than sports-related concussions. The authors propose that “less supervision at the time of injury and less access to established concussion healthcare following injury” may explain their observations.
Of course, as a card-carrying AARP old fogey, I long for the good old days when youth sports were organized by the kids in backyards and playgrounds. There we learned to pick teams and deal with the disappointment of not being a first-round pick and the embarrassment of being a last rounder. We settled out-of-bounds calls and arguments about ball possession without adults’ assistance — or video replays, for that matter. But those days are gone and likely never to return, with parental anxiety running at record highs. We must accept that youth sports organized for kids by adults are the way it’s going to be for the foreseeable future.
As long as the program is organized with the emphasis on fun and not structured as a fast track to elite play, it will be healthier for the kids than sitting on the couch at home watching the carnage on TV.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at pdnews@mdedge.com.
Night Owl or Lark? The Answer May Affect Cognition
Being an evening person (a night owl) rather than a morning person (a lark) may be linked to better performance on cognitive tests, new research suggests.
“Rather than just being personal preferences, these chronotypes could impact our cognitive function,” said study investigator Raha West, MBChB, of Imperial College London, London, England, in a statement.
But the researchers also urged caution when interpreting the findings.
“It’s important to note that this doesn’t mean all morning people have worse cognitive performance. The findings reflect an overall trend where the majority might lean toward better cognition in the evening types,” Dr. West added.
In addition, across the board, getting the recommended 7-9 hours of nightly sleep was best for cognitive function, and sleeping for less than 7 or more than 9 hours had detrimental effects on brain function regardless of whether an individual was a night owl or lark.
The study was published online in BMJ Public Health.
A UK Biobank Cohort Study
The findings are based on a cross-sectional analysis of 26,820 adults aged 53-86 years from the UK Biobank database, who were categorized into two cohorts.
Cohort 1 had 10,067 participants (56% women) who completed four cognitive tests measuring fluid intelligence/reasoning, pairs matching, reaction time, and prospective memory. Cohort 2 had 16,753 participants (56% women) who completed two cognitive assessments (pairs matching and reaction time).
Participants self-reported sleep duration, chronotype, and quality. Cognitive test scores were evaluated against sleep parameters and health and lifestyle factors, including sex, age, vascular and cardiac conditions, diabetes, alcohol use, smoking habits, and body mass index.
The results revealed a positive association between normal sleep duration (7-9 hours) and cognitive scores in Cohort 1 (beta, 0.0567), whereas extended sleep duration negatively affected scores in both Cohorts 1 and 2 (beta, –0.188 and –0.2619, respectively).
An individual’s preference for evening or morning activity correlated strongly with their test scores. In particular, night owls consistently performed better on cognitive tests than early birds.
“While understanding and working with your natural sleep tendencies is essential, it’s equally important to remember to get just enough sleep, not too long or too short,” Dr. West noted. “This is crucial for keeping your brain healthy and functioning at its best.”
Contrary to some previous findings, the study did not find a significant relationship between sleep, sleepiness/insomnia, and cognitive performance. This may be because specific aspects of insomnia, such as severity and chronicity, as well as comorbid conditions need to be considered, the investigators wrote.
They added that age and diabetes consistently emerged as negative predictors of cognitive functioning across both cohorts, in line with previous research.
Limitations of the study include the cross-sectional design, which limits causal inferences; the possibility of residual confounding; and reliance on self-reported sleep data.
Also, the study did not adjust for educational attainment, a factor potentially influential on cognitive performance and sleep patterns, because of incomplete data. The study also did not factor in depression and social isolation, which have been shown to increase the risk for cognitive decline.
No Real-World Implications
Several outside experts offered their perspective on the study in a statement from the UK nonprofit Science Media Centre.
The study provides “interesting insights” into the difference in memory and thinking in people who identify themselves as a “morning” or “evening” person, Jacqui Hanley, PhD, with Alzheimer’s Research UK, said in the statement.
However, without a detailed picture of what is going on in the brain, it’s not clear whether being a morning or evening person affects memory and thinking or whether a decline in cognition is causing changes to sleeping patterns, Dr. Hanley added.
Roi Cohen Kadosh, PhD, CPsychol, professor of cognitive neuroscience, University of Surrey, Guildford, England, cautioned that there are “multiple potential reasons” for these associations.
“Therefore, there are no implications in my view for the real world. I fear that the general public will not be able to understand that and will change their sleep pattern, while this study does not give any evidence that this will lead to any benefit,” Dr. Cohen Kadosh said.
Jessica Chelekis, PhD, MBA, a sleep expert from Brunel University London, Uxbridge, England, said that the “main takeaway should be that the cultural belief that early risers are more productive than ‘night owls’ does not hold up to scientific scrutiny.”
“While everyone should aim to get good-quality sleep each night, we should also try to be aware of what time of day we are at our (cognitive) best and work in ways that suit us. Night owls, in particular, should not be shamed into fitting a stereotype that favors an ‘early to bed, early to rise’ practice,” Dr. Chelekis said.
Funding for the study was provided by the Korea Institute of Oriental Medicine in collaboration with Imperial College London. Dr. Hanley, Dr. Cohen Kadosh, and Dr. Chelekis have no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM BMJ PUBLIC HEALTH
EMA Warns of Anaphylactic Reactions to MS Drug
Glatiramer acetate is a disease-modifying therapy (DMT) for relapsing multiple sclerosis (MS) that is given by injection.
The drug has been used for treating MS for more than 20 years, during which time it has had a good safety profile. Common side effects include vasodilation, arthralgia, anxiety, hypertonia, palpitations, and lipoatrophy.
A meeting of the EMA’s Pharmacovigilance Risk Assessment Committee (PRAC), held on July 8-11, considered evidence from an EU-wide review of all available data concerning anaphylactic reactions with glatiramer acetate. As a result, the committee concluded that the medicine is associated with a risk for anaphylactic reactions, which may occur shortly after administration or even months or years later.
Risk for Delays to Treatment
Cases involving the use of glatiramer acetate with a fatal outcome have been reported, PRAC noted.
The committee cautioned that because the initial symptoms could overlap with those of a postinjection reaction, there was a risk for delay in identifying an anaphylactic reaction.
PRAC has sanctioned a direct healthcare professional communication (DHPC) to inform healthcare professionals about the risk. Patients and caregivers should be advised of the signs and symptoms of an anaphylactic reaction and the need to seek emergency care if this should occur, the committee added. In the event of such a reaction, treatment with glatiramer acetate must be discontinued, PRAC stated.
Once adopted, the DHPC for glatiramer acetate will be disseminated to healthcare professionals by the marketing authorization holders.
Anaphylactic reactions associated with the use of glatiramer acetate have been noted in the medical literature for some years. A letter by members of the department of neurology at Albert Ludwig University Freiburg, Freiburg im Breisgau, Germany, published in the journal European Neurology in 2011, detailed six cases of anaphylactoid or anaphylactic reactions in patients while they were undergoing treatment with glatiramer acetate.
The authors highlighted that in one of the cases, a grade 1 anaphylactic reaction occurred 3 months after treatment with the drug was initiated.
A version of this article first appeared on Medscape.com.
Factors Linked to Complete Response, Survival in Pancreatic Cancer
TOPLINE:
A pathologic complete response after preoperative chemo(radio)therapy was associated with improved overall survival in patients with localized pancreatic adenocarcinoma, a multicenter cohort study found. Several factors, including treatment type and tumor features, influenced the outcomes.
METHODOLOGY:
- Preoperative chemo(radio)therapy is increasingly used in patients with localized pancreatic adenocarcinoma and may improve the chance of a pathologic complete response. Achieving a pathologic complete response is associated with improved overall survival.
- However, the evidence on pathologic complete response is based on large national databases or small single-center series. Multicenter studies with in-depth data about complete response are lacking.
- In the current analysis, researchers investigated the incidence and factors associated with pathologic complete response after preoperative chemo(radio)therapy among 1758 patients (mean age, 64 years; 50% men) with localized pancreatic adenocarcinoma who underwent resection after two or more cycles of chemotherapy (with or without radiotherapy).
- Patients were treated at 19 centers in eight countries. The median follow-up was 19 months. Pathologic complete response was defined as the absence of vital tumor cells in the patient’s sampled pancreas specimen after resection.
- Factors associated with overall survival and pathologic complete response were investigated with Cox proportional hazards and logistic regression models, respectively.
TAKEAWAY:
- Researchers found that the rate of pathologic complete response was 4.8% in patients who received chemo(radio)therapy before pancreatic cancer resection.
- Having a pathologic complete response was associated with a 54% lower risk for death (hazard ratio, 0.46). At 5 years, the overall survival rate was 63% in patients with a pathologic complete response vs 30% in patients without one.
- Preoperative modified FOLFIRINOX was more common among patients who achieved a pathologic complete response than among those who did not (58.8% vs 44.7%). Other factors associated with pathologic complete response included tumors located in the pancreatic head (odds ratio [OR], 2.51), tumors > 40 mm at diagnosis (OR, 2.58), partial or complete radiologic response (OR, 13.0), and normal(ized) serum carbohydrate antigen 19-9 after preoperative therapy (OR, 3.76).
- Preoperative radiotherapy (OR, 2.03) and preoperative stereotactic body radiotherapy (OR, 8.91) were also associated with a pathologic complete response; however, preoperative radiotherapy did not improve overall survival, and preoperative stereotactic body radiotherapy was independently associated with worse overall survival. These findings suggest that a pathologic complete response might not always reflect an optimal disease response.
IN PRACTICE:
Although pathologic complete response does not reflect cure, it is associated with better overall survival, the authors wrote. Factors associated with a pathologic complete response may inform treatment decisions.
SOURCE:
The study, with first author Thomas F. Stoop, MD, University of Amsterdam, the Netherlands, was published online on June 18 in JAMA Network Open.
LIMITATIONS:
The study had several limitations. The sample size and the limited number of events precluded comparative subanalyses, as well as a more detailed stratification for preoperative chemotherapy regimens. Information about patients’ race and the presence of BRCA germline mutations, both of which seem to be relevant to the chance of achieving a major pathologic response, was not collected or available.
DISCLOSURES:
No specific funding was noted. Several coauthors have industry relationships outside of the submitted work.
A version of this article first appeared on Medscape.com.
Managing Cancer in Pregnancy: Improvements and Considerations
Introduction: Tremendous Progress on Cancer Extends to Cancer in Pregnancy
The biomedical research enterprise that took shape in the United States after World War II has had numerous positive effects, including significant progress made during the past 75-plus years in the diagnosis, prevention, and treatment of cancer.
President Franklin D. Roosevelt’s 1944 request of Dr. Vannevar Bush, director of the then Office of Scientific Research and Development, to organize a program that would advance and apply scientific knowledge for times of peace — just as it had been advanced and applied in times of war — culminated in a historic report, Science – The Endless Frontier. Presented in 1945 to President Harry S. Truman, this report helped fuel decades of broad, bold, and coordinated government-sponsored biomedical research aimed at addressing disease and improving the health of the American people (National Science Foundation, 1945).
Discoveries made from research in basic and translational sciences deepened our knowledge of the cellular and molecular underpinnings of cancer, leading to advances in chemotherapy, radiotherapy, and other treatment approaches as well as continual refinements in their application. Similarly, our diagnostic armamentarium has significantly improved.
As a result, we have reduced both the incidence and mortality of cancer. Today, some cancers can be prevented. Others can be reversed or put in remission. Granted, progress has been variable, with some cancers such as ovarian cancer still having relatively low survival rates. Much more needs to be done. Overall, however, the positive effects of the U.S. biomedical research enterprise on cancer are evident. According to the National Cancer Institute’s most recent report on the status of cancer, death rates from cancer fell 1.9% per year on average in females from 2015 to 2019 (Cancer. 2022 Oct 22. doi: 10.1002/cncr.34479).
It is not only patients whose cancer occurs outside of pregnancy who have benefited. When treatment is appropriately selected and timing considerations are made, patients whose cancer is diagnosed during pregnancy — and their children — can have good outcomes.
To explain how the management of cancer in pregnancy has improved, we have invited Gautam G. Rao, MD, gynecologic oncologist and associate professor of obstetrics, gynecology, and reproductive sciences at the University of Maryland School of Medicine, to write this installment of the Master Class in Obstetrics. As Dr. Rao explains, radiation is not as dangerous to the fetus as once thought, and the safety of many chemotherapeutic regimens in pregnancy has been documented. Obstetricians can and should counsel patients, he explains, about the likelihood of good maternal and fetal outcomes.
E. Albert Reece, MD, PhD, MBA, a maternal-fetal medicine specialist, is dean emeritus of the University of Maryland School of Medicine and former university executive vice president. He is currently the endowed professor and director of the Center for Advanced Research Training and Innovation (CARTI) and senior scientist in the Center for Birth Defects Research. Dr. Reece reported no relevant disclosures. He is the medical editor of this column. Contact him at obnews@mdedge.com.
Managing Cancer in Pregnancy
Cancer can cause fear and distress for any patient, but when cancer is diagnosed during pregnancy, an expectant mother fears not only for her own health but for the health of her unborn child. Fortunately, ob.gyn.s and multidisciplinary teams have good reason to reassure patients about the likelihood of good outcomes.
Cancer treatment in pregnancy has improved with advancements in imaging and chemotherapy, and while maternal and fetal outcomes of prenatal cancer treatment are not well reported, evidence acquired in recent years from case series and retrospective studies shows that most imaging studies and procedural diagnostic tests – and many treatments – can be performed safely in pregnancy.
Decades ago, we avoided CT scans during pregnancy because of concerns about radiation exposure to the fetus, leaving some patients without accurate staging of their cancer. Today, we have evidence that a CT scan is generally safe in pregnancy. Similarly, the safety of many chemotherapeutic regimens in pregnancy has been documented in recent decades, and the use of chemotherapy during pregnancy has increased progressively. Radiation is also commonly utilized in the management of cancers that may occur during pregnancy, such as breast cancer.1
Considerations of timing are often central to decision-making; chemotherapy and radiotherapy are generally avoided in the first trimester to prevent structural fetal anomalies, for instance, and delaying cancer treatment is often warranted when the patient is a few weeks away from delivery. On occasion, iatrogenic preterm birth is considered when the risks to the mother of delaying a necessary cancer treatment outweigh the risks to the fetus of prematurity.1
Pregnancy termination is rarely indicated, however, and information gathered over the past 2 decades suggests that fetal and placental metastases are rare.1 There is broad agreement that prenatal treatment of cancer in pregnancy should adhere as much as possible to protocols and guidelines for nonpregnant patients and that treatment delays driven by fear of fetal anomalies and miscarriage are unnecessary.
Cancer Incidence, Use of Diagnostic Imaging
Data on the incidence of cancer in pregnancy come from population-based cancer registries, and unfortunately, these data are not standardized and are often incomplete. Many studies include cancer diagnosed up to 1 year after pregnancy, and some include preinvasive disease. Estimates therefore vary considerably (see Table 1 for a sampling of estimated incidences).
It has been reported, and often cited in the literature, that invasive malignancy complicates one in 1,000 pregnancies and that the incidence of cancer in pregnancy (invasive and noninvasive malignancies) has been rising over time.8 Increasing maternal age is believed to be playing a role in this rise; as women delay childbearing, they enter the age range in which some cancers become more common. Additionally, improvements in screening and diagnostics have led to earlier cancer detection. The incidence of ovarian neoplasms found during pregnancy has increased, for instance, with the routine use of diagnostic ultrasound in pregnancy.1
Among the studies showing an increased incidence of pregnancy-associated cancer is a population-based study in Australia, which found that from 1994 to 2007 the crude incidence of pregnancy-associated cancer increased from 112.3 to 191.5 per 100,000 pregnancies (P < .001).9 A cohort study in the United States documented an increase in incidence from 75.0 per 100,000 pregnancies in 2002 to 138.5 per 100,000 pregnancies in 2012.10
Overall, the literature shows us that the skin, cervix, and breast are also common sites for malignancy during pregnancy.1 According to a 2022 review, breast cancer during pregnancy is less often hormone receptor–positive and more frequently triple negative compared with age-matched controls.11 The frequencies of other pregnancy-associated cancers appear overall to be similar to those of cancers occurring in all women across their reproductive years.1
Too often, diagnosis is delayed because cancer symptoms can be masked by, or can mimic, normal physiological changes in pregnancy. Breast cancer, for instance, can be difficult to diagnose during pregnancy and lactation because of anatomic changes in the breast parenchyma; several studies published in the 1990s showed that breast cancer presents at a more advanced stage in pregnant patients than in nonpregnant patients because of this delay.1 Similarly, skin changes suggestive of melanoma can be attributed to hyperpigmentation of pregnancy, and several observational studies have suggested that thicker melanomas found in pregnancy may be the result of delayed diagnosis.8
It is important that we thoroughly investigate signs and symptoms suggestive of a malignancy and not automatically attribute these symptoms to the pregnancy itself. Cervical biopsy of a mass or lesion suspicious for cervical cancer can be done safely during pregnancy and should not be delayed or deferred.
Fetal radiation exposure from radiologic examinations has long been a concern, but we know today that while the imaging modality should be chosen to minimize fetal radiation exposure, CT scans and even PET scans should be performed if these exams are deemed best for evaluation. Embryonic exposure to a dose of less than 50 mGy is rarely, if ever, associated with fetal malformations or miscarriage, and a radiation dose of 100 mGy may be considered a threshold for consideration of therapeutic termination of pregnancy.1,8
CT exams are associated with a fetal dose far less than 50 mGy (see Table 2 for radiation doses).
Magnetic resonance imaging with a magnet strength of 3 Tesla or less in any trimester is not associated with an increased risk of harm to the fetus or of adverse outcomes in early childhood, but the contrast agent gadolinium should be avoided in pregnancy as it has been associated with an increased risk of stillbirth, neonatal death, and childhood inflammatory, rheumatologic, and infiltrative skin lesions.1,8,12
Chemotherapy, Surgery, and Radiation in Pregnancy
The management of cancer during pregnancy requires a multidisciplinary team including medical, gynecologic, and radiation oncologists as well as maternal-fetal medicine specialists (Figure 1). Prematurity and low birth weight are frequent complications for fetuses exposed to chemotherapy, although there is some uncertainty as to whether the treatment is causative. However, congenital anomalies are no longer a major concern, provided that drugs are appropriately selected and that fetal exposure occurs during the second or third trimester.
For instance, alkylating agents including cisplatin (an important drug in the management of gynecologic malignancies) have been associated with congenital anomalies in the first trimester but not in the second and third trimesters, and a variety of antimetabolites — excluding methotrexate and aminopterin — similarly have been shown to be relatively safe when used after the first trimester.1
Small studies have shown no long-term effects of chemotherapy exposure on postnatal growth and long-term neurologic/neurocognitive function,1 but this is an area that needs more research.
Also in need of investigation is the safety of newer agents in pregnancy. Data are limited on the use of new targeted treatments, monoclonal antibodies, and immunotherapies in pregnancy and their effects on the fetus, with current knowledge coming mainly from single case reports.13
Until more is learned — a challenge given that pregnant women are generally excluded from clinical trials — management teams are generally postponing use of these therapies until after delivery. Considering the pace of new developments revolutionizing cancer treatment, this topic will likely get more complex and confusing before we begin acquiring sufficient knowledge.
The timing of surgery for malignancy in pregnancy is similarly based on the balance of maternal and fetal risks, including the risk of maternal disease progression, the risk of preterm delivery, and the prevention of fetal metastases. In general, the safest time is the second trimester.
Maternal surgery in the third trimester may be associated with a risk of premature labor and altered uteroplacental perfusion. A 2005 systematic review of 12,452 women who underwent nonobstetric surgery during pregnancy provides some reassurance, however; compared with the general obstetric population, there was no increase in the rate of miscarriage or major birth defects.14
Radiotherapy used to be contraindicated in pregnancy, but many experts today believe it can be safely utilized provided the uterus is out of the field and protected from scattered radiation. The head, neck, and breast, for instance, can be treated with newer radiotherapies, including stereotactic ablative radiation therapy.8 Patients with advanced cervical cancer often receive chemotherapy during pregnancy to slow metastatic growth, followed by definitive treatment with postpartum radiation or surgery.
More research is needed, but available data on maternal outcomes are encouraging. For instance, there appear to be no significant differences in short- and long-term complications or survival between pregnant and nonpregnant women treated for invasive cervical cancer.8 Similarly, while earlier studies of breast cancer diagnosed during pregnancy suggested a poor prognosis, data now show similar prognoses for pregnant and nonpregnant patients when controlled for stage.1
Dr. Rao is a gynecologic oncologist and associate professor of obstetrics, gynecology, and reproductive sciences at the University of Maryland School of Medicine, Baltimore. He reported no relevant disclosures.
References
1. Rao GG. In: Reece EA et al., eds. Clinical Obstetrics: The Fetus & Mother. 4th ed. 2021: Chapter 42.
2. Bannister-Tyrrell M et al. Aust N Z J Obstet Gynaecol. 2014;55(2):116-122.
3. Oehler MK et al. Aust N Z J Obstet Gynaecol. 2003;43(6):414-420.
4. Ruiz R et al. Breast. 2017;35:136-141. doi: 10.1016/j.breast.2017.07.008.
5. Nolan S et al. Am J Obstet Gynecol. 2019;220(1):S480. doi: 10.1016/j.ajog.2018.11.752.
6. El-Messidi A et al. J Perinat Med. 2015;43(6):683-688. doi: 10.1515/jpm-2014-0133.
7. Pellino G et al. Eur J Gastroenterol Hepatol. 2017;29(7):743-753. doi: 10.1097/MEG.0000000000000863.
8. Eastwood-Wilshere N et al. Asia-Pac J Clin Oncol. 2019;15:296-308.
9. Lee YY et al. BJOG. 2012;119(13):1572-1582.
10. Cottreau CM et al. J Womens Health (Larchmt). 2019 Feb;28(2):250-257.
11. Boere I et al. Best Pract Res Clin Obstet Gynaecol. 2022;82:46-59.
12. Ray JG et al. JAMA 2016;316(9):952-961.
13. Schwab R et al. Cancers. (Basel) 2021;13(12):3048.
14. Cohen-Kerem R et al. Am J Surg. 2005;190(3):467-473.
Dr. Rao is a gynecologic oncologist and associate professor of obstetrics, gynecology, and reproductive sciences at the University of Maryland School of Medicine, Baltimore. He reported no relevant disclosures.
References
1. Rao GG. Chapter 42. Clinical Obstetrics: The Fetus & Mother, 4th ed. Reece EA et al. (eds): 2021.
2. Bannister-Tyrrell M et al. Aust N Z J Obstet Gynaecol. 2014;55:116-122.
3. Oehler MK et al. Aust N Z J Obstet Gynaecol. 2003;43(6):414-420.
4. Ruiz R et al. Breast. 2017;35:136-141. doi: 10.1016/j.breast.2017.07.008.
5. Nolan S et al. Am J Obstet Gynecol. 2019;220(1):S480. doi: 10.1016/j.ajog.2018.11.752.
6. El-Messidi A et al. J Perinat Med. 2015;43(6):683-688. doi: 10.1515/jpm-2014-0133.
7. Pellino G et al. Eur J Gastroenterol Hepatol. 2017;29(7):743-753. doi: 10.1097/MEG.0000000000000863.
8. Eastwood-Wilshere N et al. Asia-Pac J Clin Oncol. 2019;15:296-308.
9. Lee YY et al. BJOG. 2012;119(13):1572-1582.
10. Cottreau CM et al. J Womens Health (Larchmt). 2019 Feb;28(2):250-257.
11. Boere I et al. Best Pract Res Clin Obstet Gynaecol. 2022;82:46-59.
12. Ray JG et al. JAMA 2016;316(9):952-961.
13. Schwab R et al. Cancers. (Basel) 2021;13(12):3048.
14. Cohen-Kerem et al. Am J Surg. 2005;190(3):467-473.
Introduction: Tremendous Progress on Cancer Extends to Cancer in Pregnancy
The biomedical research enterprise that took shape in the United States after World War II has had numerous positive effects, including significant progress made during the past 75-plus years in the diagnosis, prevention, and treatment of cancer.
President Franklin D. Roosevelt’s 1944 request of Dr. Vannevar Bush, director of the then Office of Scientific Research and Development, to organize a program that would advance and apply scientific knowledge in times of peace — just as it had been advanced and applied in times of war — culminated in a historic report, Science – The Endless Frontier. Presented in 1945 to President Harry S. Truman, this report helped fuel decades of broad, bold, and coordinated government-sponsored biomedical research aimed at addressing disease and improving the health of the American people (National Science Foundation, 1945).
Discoveries made from research in basic and translational sciences deepened our knowledge of the cellular and molecular underpinnings of cancer, leading to advances in chemotherapy, radiotherapy, and other treatment approaches as well as continual refinements in their application. Similarly, our diagnostic armamentarium has significantly improved.
As a result, we have reduced both the incidence and mortality of cancer. Today, some cancers can be prevented. Others can be reversed or put in remission. Granted, progress has been variable, with some cancers such as ovarian cancer still having relatively low survival rates. Much more needs to be done. Overall, however, the positive effects of the U.S. biomedical research enterprise on cancer are evident. According to the National Cancer Institute’s most recent report on the status of cancer, death rates from cancer fell 1.9% per year on average in females from 2015 to 2019 (Cancer. 2022 Oct 22. doi: 10.1002/cncr.34479).
It is not only patients whose cancer occurs outside of pregnancy who have benefited. When treatment is appropriately selected and timing considerations are made, patients whose cancer is diagnosed during pregnancy — and their children — can have good outcomes.
To explain how the management of cancer in pregnancy has improved, we have invited Gautam G. Rao, MD, gynecologic oncologist and associate professor of obstetrics, gynecology, and reproductive sciences at the University of Maryland School of Medicine, to write this installment of the Master Class in Obstetrics. As Dr. Rao explains, radiation is not as dangerous to the fetus as once thought, and the safety of many chemotherapeutic regimens in pregnancy has been documented. Obstetricians can and should counsel patients, he explains, about the likelihood of good maternal and fetal outcomes.
E. Albert Reece, MD, PhD, MBA, a maternal-fetal medicine specialist, is dean emeritus of the University of Maryland School of Medicine and former university executive vice president. He is currently endowed professor and director of the Center for Advanced Research Training and Innovation (CARTI) and senior scientist in the Center for Birth Defects Research. Dr. Reece reported no relevant disclosures. He is the medical editor of this column. Contact him at obnews@mdedge.com.
Managing Cancer in Pregnancy
Cancer can cause fear and distress for any patient, but when cancer is diagnosed during pregnancy, an expectant mother fears not only for her own health but for the health of her unborn child. Fortunately, ob.gyn.s and multidisciplinary teams have good reason to reassure patients about the likelihood of good outcomes.
Cancer treatment in pregnancy has improved with advancements in imaging and chemotherapy, and while maternal and fetal outcomes of prenatal cancer treatment are not well reported, evidence acquired in recent years from case series and retrospective studies shows that most imaging studies and procedural diagnostic tests – and many treatments – can be performed safely in pregnancy.
Decades ago, we avoided CT scans during pregnancy because of concerns about radiation exposure to the fetus, leaving some patients without an accurate staging of their cancer. Today, we have evidence that a CT scan is generally safe in pregnancy. Similarly, the safety of many chemotherapeutic regimens in pregnancy has been documented in recent decades, and the use of chemotherapy during pregnancy has increased progressively. Radiation is also commonly utilized in the management of cancers that may occur during pregnancy, such as breast cancer.1
Considerations of timing are often central to decision-making; chemotherapy and radiotherapy are generally avoided in the first trimester to prevent structural fetal anomalies, for instance, and delaying cancer treatment is often warranted when the patient is a few weeks away from delivery. On occasion, iatrogenic preterm birth is considered when the risks to the mother of delaying a necessary cancer treatment outweigh the risks to the fetus of prematurity.1
Pregnancy termination is rarely indicated, however, and information gathered over the past 2 decades suggests that fetal and placental metastases are rare.1 There is broad agreement that prenatal treatment of cancer in pregnancy should adhere as much as possible to protocols and guidelines for nonpregnant patients and that treatment delays driven by fear of fetal anomalies and miscarriage are unnecessary.
Cancer Incidence, Use of Diagnostic Imaging
Data on the incidence of cancer in pregnancy come from population-based cancer registries, and unfortunately, these data are not standardized and are often incomplete. Many studies include cancer diagnosed up to 1 year after pregnancy, and some include preinvasive disease. Estimates therefore vary considerably (see Table 1 for a sampling of estimated incidences).
It has been reported, and often cited in the literature, that invasive malignancy complicates one in 1,000 pregnancies and that the incidence of cancer in pregnancy (invasive and noninvasive malignancies) has been rising over time.8 Increasing maternal age is believed to be playing a role in this rise; as women delay childbearing, they enter the age range in which some cancers become more common. Additionally, improvements in screening and diagnostics have led to earlier cancer detection. The incidence of ovarian neoplasms found during pregnancy has increased, for instance, with the routine use of diagnostic ultrasound in pregnancy.1
Among the studies showing an increased incidence of pregnancy-associated cancer is a population-based study in Australia, which found that from 1994 to 2007 the crude incidence of pregnancy-associated cancer increased from 112.3 to 191.5 per 100,000 pregnancies (P < .001).9 A cohort study in the United States documented an increase in incidence from 75.0 per 100,000 pregnancies in 2002 to 138.5 per 100,000 pregnancies in 2012.10
Overall, the literature shows that the skin, cervix, and breast are also common sites for malignancy during pregnancy.1 According to a 2022 review, breast cancer during pregnancy is less often hormone receptor–positive and more frequently triple negative compared with age-matched controls.11 The frequencies of other pregnancy-associated cancers appear overall to be similar to those of cancers occurring in all women across their reproductive years.1
Too often, diagnosis is delayed because cancer symptoms can be masked by, or can mimic, normal physiological changes of pregnancy. Breast cancer, for instance, can be difficult to diagnose during pregnancy and lactation because of anatomic changes in the breast parenchyma; several studies published in the 1990s showed that breast cancer presents at a more advanced stage in pregnant patients than in nonpregnant patients because of this delay.1 Similarly, skin changes suggestive of melanoma can be attributed to the hyperpigmentation of pregnancy, and several observational studies have suggested that the thicker melanomas found in pregnancy may reflect delayed diagnosis.8
It is important that we thoroughly investigate signs and symptoms suggestive of a malignancy and not automatically attribute these symptoms to the pregnancy itself. Cervical biopsy of a mass or lesion suspicious for cervical cancer can be done safely during pregnancy and should not be delayed or deferred.
Fetal radiation exposure from radiologic examinations has long been a concern, but we know today that while the imaging modality should be chosen to minimize fetal radiation exposure, CT scans and even PET scans should be performed if these exams are deemed best for evaluation. Embryonic exposure to a dose of less than 50 mGy is rarely, if ever, associated with fetal malformations or miscarriage, and a dose of 100 mGy may be considered a threshold for considering therapeutic termination of pregnancy.1,8
CT exams are associated with a fetal dose far less than 50 mGy (see Table 2 for radiation doses).
Magnetic resonance imaging with a magnet strength of 3 Tesla or less in any trimester is not associated with an increased risk of harm to the fetus or in early childhood, but the contrast agent gadolinium should be avoided in pregnancy as it has been associated with an increased risk of stillbirth, neonatal death, and childhood inflammatory, rheumatologic, and infiltrative skin lesions.1,8,12
Chemotherapy, Surgery, and Radiation in Pregnancy
The management of cancer during pregnancy requires a multidisciplinary team including medical, gynecologic, or radiation oncologists, and maternal-fetal medicine specialists (Figure 1). Prematurity and low birth weight are frequent complications for fetuses exposed to chemotherapy, although there is some uncertainty as to whether the treatment is causative. However, congenital anomalies no longer are a major concern, provided that drugs are appropriately selected and that fetal exposure occurs during the second or third trimester.
For instance, alkylating agents including cisplatin (an important drug in the management of gynecologic malignancies) have been associated with congenital anomalies in the first trimester but not in the second and third trimesters, and a variety of antimetabolites — excluding methotrexate and aminopterin — similarly have been shown to be relatively safe when used after the first trimester.1
Small studies have shown no long-term effects of chemotherapy exposure on postnatal growth and long-term neurologic/neurocognitive function,1 but this is an area that needs more research.
Also in need of investigation is the safety of newer agents in pregnancy. Data are limited on the use of new targeted treatments, monoclonal antibodies, and immunotherapies in pregnancy and their effects on the fetus, with current knowledge coming mainly from single case reports.13
Until more is learned — a challenge given that pregnant women are generally excluded from clinical trials — management teams are generally postponing use of these therapies until after delivery. Considering the pace of new developments revolutionizing cancer treatment, this topic will likely get more complex and confusing before we begin acquiring sufficient knowledge.
The timing of surgery for malignancy in pregnancy is similarly based on the balance of maternal and fetal risks, including the risk of maternal disease progression, the risk of preterm delivery, and the prevention of fetal metastases. In general, the safest time is the second trimester.
Maternal surgery in the third trimester may be associated with a risk of premature labor and altered uteroplacental perfusion. A 2005 systematic review of 12,452 women who underwent nonobstetric surgery during pregnancy provides some reassurance, however; compared with the general obstetric population, there was no increase in the rate of miscarriage or major birth defects.14
Radiotherapy used to be contraindicated in pregnancy but many experts today believe it can be safely utilized provided the uterus is out of field and is protected from scattered radiation. The head, neck, and breast, for instance, can be treated with newer radiotherapies, including stereotactic ablative radiation therapy.8 Patients with advanced cervical cancer often receive chemotherapy during pregnancy to slow metastatic growth followed by definitive treatment with postpartum radiation or surgery.
More research is needed, but available data on maternal outcomes are encouraging. For instance, there appear to be no significant differences in short- and long-term complications or survival between women who are pregnant and nonpregnant when treated for invasive cervical cancer.8 Similarly, while earlier studies of breast cancer diagnosed during pregnancy suggested a poor prognosis, data now show similar prognoses for pregnant and nonpregnant patients when controlled for stage.1
Dr. Rao is a gynecologic oncologist and associate professor of obstetrics, gynecology, and reproductive sciences at the University of Maryland School of Medicine, Baltimore. He reported no relevant disclosures.
References
1. Rao GG. In: Reece EA et al, eds. Clinical Obstetrics: The Fetus & Mother. 4th ed. 2021: Chapter 42.
2. Bannister-Tyrrell M et al. Aust N Z J Obstet Gynaecol. 2014;55:116-122.
3. Oehler MK et al. Aust N Z J Obstet Gynaecol. 2003;43(6):414-420.
4. Ruiz R et al. Breast. 2017;35:136-141. doi: 10.1016/j.breast.2017.07.008.
5. Nolan S et al. Am J Obstet Gynecol. 2019;220(1):S480. doi: 10.1016/j.ajog.2018.11.752.
6. El-Messidi A et al. J Perinat Med. 2015;43(6):683-688. doi: 10.1515/jpm-2014-0133.
7. Pellino G et al. Eur J Gastroenterol Hepatol. 2017;29(7):743-753. doi: 10.1097/MEG.0000000000000863.
8. Eastwood-Wilshere N et al. Asia-Pac J Clin Oncol. 2019;15:296-308.
9. Lee YY et al. BJOG. 2012;119(13):1572-1582.
10. Cottreau CM et al. J Womens Health (Larchmt). 2019 Feb;28(2):250-257.
11. Boere I et al. Best Pract Res Clin Obstet Gynaecol. 2022;82:46-59.
12. Ray JG et al. JAMA. 2016;316(9):952-961.
13. Schwab R et al. Cancers (Basel). 2021;13(12):3048.
14. Cohen-Kerem et al. Am J Surg. 2005;190(3):467-473.
Benefit of Massage Therapy for Pain Unclear
The effectiveness of massage therapy for a range of painful adult health conditions remains uncertain. Despite hundreds of randomized clinical trials and dozens of systematic reviews, few studies have offered conclusions based on more than low-certainty evidence, a systematic review in JAMA Network Open has shown (doi: 10.1001/jamanetworkopen.2024.22259).
Some moderate-certainty evidence, however, suggested massage therapy may alleviate pain related to such conditions as low-back problems, labor, and breast cancer surgery, concluded a group led by Selene Mak, PhD, MPH, program manager in the Evidence Synthesis Program at the Veterans Health Administration Greater Los Angeles Healthcare System in Los Angeles, California.
“More high-quality randomized clinical trials are needed to provide a stronger evidence base to assess the effect of massage therapy on pain,” Dr. Mak and colleagues wrote.
The review updates a previous Veterans Affairs evidence map covering reviews of massage therapy for pain published through 2018.
To categorize the evidence base for decision-making by policymakers and practitioners, the VA requested an updated evidence map of reviews to answer the question: “What is the certainty of evidence in systematic reviews of massage therapy for pain?”
The Analysis
The current review included studies published from 2018 to 2023 with formal ratings of evidence quality or certainty, excluding other nonpharmacologic techniques such as sports massage therapy, osteopathy, dry cupping, dry needling, and internal massage therapy, and self-administered techniques such as foam rolling.
Of 129 systematic reviews, only 41 formally rated evidence quality, and 17 were evidence-mapped for pain across 13 health states: cancer, back, neck and mechanical neck issues, fibromyalgia, labor, myofascial, palliative care need, plantar fasciitis, postoperative, post breast cancer surgery, and post cesarean/postpartum.
The investigators found no conclusions based on a high certainty of evidence, while seven based conclusions on moderate-certainty evidence. All remaining conclusions were rated as having low- or very-low-certainty evidence.
The priority, they added, should be studies comparing massage therapy with other recommended, accepted, and active therapies for pain, with sufficiently long follow-up to allow any nonspecific outcomes to dissipate. At least 6 months’ follow-up has been suggested for studies of chronic pain.
While massage therapy is considered safe, in patients with central sensitization, more aggressive treatments may cause a flare of myofascial pain.
This study was funded by the Department of Veterans Affairs Health Services Research and Development. The authors had no conflicts of interest to disclose.
FROM JAMA NETWORK OPEN
How Aspirin May Lower Risk for Colorectal Cancer
A 2020 meta-analysis, for instance, found that 325 mg of daily aspirin — the typical dose in a single tablet — conferred a 35% reduced risk of developing CRC, and a highly cited 2010 study in The Lancet found that a low dose of daily aspirin reduced the incidence of colon cancer by 24% and colon cancer deaths by 35% over 20 years.
The evidence surrounding aspirin and CRC is so intriguing that more than 70,000 people are currently participating in more than two dozen clinical studies worldwide, putting aspirin through its paces as an intervention in CRC.
But what, exactly, is aspirin doing?
We know that aspirin inhibits cyclooxygenase (COX) enzymes — COX-1 and COX-2, specifically — and that the COX-2 pathway is implicated in the development and progression of CRC, explained Marco Scarpa, MD, PhD, staff surgeon at the University of Padova in Padova, Italy.
“However, the new thing we’ve found is that aspirin may have a direct role in enhancing immunosurveillance,” Dr. Scarpa said in an interview.
In April, Dr. Scarpa’s team published a paper in Cancer describing a mechanism that provides deeper insight into the aspirin-CRC connection.
Dr. Scarpa heads up the IMMUNOREACT study group, a collaboration of dozens of researchers across Italy running studies on immunosurveillance in rectal cancer. In the baseline study, IMMUNOREACT 1, the team created and analyzed a database of records from 238 patients who underwent surgery for CRC at the Azienda Ospedale Università di Padova, Padova, Italy, from 2015 to 2019.
Using the same database, the latest findings from IMMUNOREACT 7 focused on the fate of the 31 patients (13%) who used aspirin regularly.
The researchers found that regular aspirin use did not appear to affect colorectal tumor stage at diagnosis, but tumor grading was significantly lower overall, especially in patients with BRAF mutations. Regular aspirin users were also less likely to have nodal metastases and metastatic lymph nodes, and this effect was more pronounced in patients with proximal (right-sided) colon cancer vs distal (left-sided).
Most notably, IMMUNOREACT 7 revealed that aspirin has beneficial effects on the CRC immune microenvironment.
The team found that aspirin directly boosts the presence of antigens on gastrointestinal epithelial tumor cells, which can direct the body’s immune response to combat the cancer.
At a macro level, the aspirin users in the study were more likely to have high levels of tumor-infiltrating lymphocytes (TILs). Dr. Scarpa’s team had previously shown that high levels of CD8+ and CD3+ TILs were predictive of successful neoadjuvant therapy in rectal cancer.
Cytotoxic CD8+ T cells are central to the anticancer immune response, and in the latest study, a high ratio of CD8+/CD3+ T cells was more common in aspirin users, suggesting a stronger presence of cancer-killing CD8+ cells. Expression of CD8 beta+, an activation marker of CD8+ cells, was also enhanced in aspirin users.
The most significant discovery, according to Dr. Scarpa, was that aspirin users were more likely to show high expression of CD80 on the surface of their rectal epithelial cells.
CD80 is a molecule that allows T cells to identify the tumor cell as foreign and kill it. Although cancer cells can downregulate their CD80 to avoid detection by T cells, the study suggests that aspirin appears to help foil this strategy by boosting the production of CD80 on the surface of the tumor cells.
The researchers confirmed the clinical findings by showing that aspirin increased CD80 gene expression in lab-cultivated CRC cells.
“We didn’t expect the activation through CD80,” said Dr. Scarpa. “This means that aspirin can act on this very first interaction between the epithelial cell and the CD8+ lymphocyte.”
Overall, these new data suggest that aspirin helps activate the immune system, which helps explain its potential chemopreventive effect in CRC.
However, one puzzling result was that aspirin boosted expression of PD-L1 genes in the CRC cells, said Joanna Davies, DPhil, an immunologist who heads up the San Diego Biomedical Research Institute, San Diego, California, and was not involved in the study.
PD-L1 serves as an “off” switch for patrolling T cells, which protects the tumor cell from being recognized.
“If aspirin is inducing PD-L1 on cancer cells, that is a potential problem,” said Dr. Davies. “An ideal therapy might be the combination of aspirin to enhance the CD8 T cells in the tumor and immune checkpoint blockade to block PD-L1.”
David Kerr, CBE, MD, DSc, agreed that high-dose aspirin plus immunotherapy might be “a wee bit more effective.” However, the combination would be blocked by the economics of drug development: “Will anybody ever do a trial of 10,000 patients to prove that? Not on your nelly,” said Dr. Kerr, professor of cancer medicine at the University of Oxford, Oxford, England.
Despite the small patient numbers in the study, Dr. Kerr felt encouraged by the IMMUNOREACT analysis. “It’s a plausible piece of science and some quite promising work on the tumor immune microenvironment and the effects of aspirin on it,” Dr. Kerr said in a recent commentary for this news organization.
Dr. Scarpa and Dr. Davies had no conflicts of interest to declare.
A version of this article appeared on Medscape.com.
A 2020 meta-analysis, for instance, found that 325 mg of daily aspirin — the typical dose in a single tablet — conferred a 35% reduced risk of developing CRC, and a highly cited The Lancet study from 2010 found that a low dose of daily aspirin reduced the incidence of colon cancer by 24% and colon cancer deaths by 35% over 20 years.
The evidence surrounding aspirin and CRC is so intriguing that more than 70,000 people are currently participating in more than two dozen clinical studies worldwide, putting aspirin through its paces as an intervention in CRC.
But what, exactly, is aspirin doing?
We know that aspirin inhibits cyclooxygenase (COX) enzymes — COX-1 and COX-2, specifically — and that the COX-2 pathway is implicated in the development and progression of CRC, explained Marco Scarpa, MD, PhD, staff surgeon at the University of Padova in Padova, Italy.
“However, the new thing we’ve found is that aspirin may have a direct role in enhancing immunosurveillance,” Dr. Scarpa said in an interview.
In April, Dr. Scarpa’s team published a paper in Cancer describing a mechanism that provides deeper insight into the aspirin-CRC connection.
Dr. Scarpa heads up the IMMUNOREACT study group, a collaboration of dozens of researchers across Italy running studies on immunosurveillance in rectal cancer. In the baseline study, IMMUNOREACT 1, the team created and analyzed a database of records from 238 patients who underwent surgery for CRC at the Azienda Ospedale Università di Padova, Padova, Italy, from 2015 to 2019.
Using the same database, the latest study, IMMUNOREACT 7, focused on the fate of the 31 patients (13%) who used aspirin regularly.
The researchers found that regular aspirin use did not appear to affect colorectal tumor stage at diagnosis, but tumor grading was significantly lower overall, especially in patients with BRAF mutations. Regular aspirin users were also less likely to have nodal metastases and metastatic lymph nodes, and this effect was more pronounced in patients with proximal (right-sided) colon cancer vs distal (left-sided).
Most notably, IMMUNOREACT 7 revealed that aspirin has beneficial effects on the CRC immune microenvironment.
The team found that aspirin directly boosts the presence of antigens on gastrointestinal epithelial tumor cells, which can direct the body’s immune response to combat the cancer.
At a macro level, the aspirin users in the study were more likely to have high levels of tumor-infiltrating lymphocytes (TILs). Dr. Scarpa’s team had previously shown that high levels of CD8+ and CD3+ TILs were predictive of successful neoadjuvant therapy in rectal cancer.
Cytotoxic CD8+ T cells are central to the anticancer immune response, and in the latest study, a high ratio of CD8+/CD3+ T cells was more common in aspirin users, suggesting a stronger presence of cancer-killing CD8+ cells. Expression of CD8 beta+, an activation marker of CD8+ cells, was also enhanced in aspirin users.
The most significant discovery, according to Dr. Scarpa, was that aspirin users were more likely to show high expression of CD80 on the surface of their rectal epithelial cells.
CD80 is a molecule that allows T cells to identify the tumor cell as foreign and kill it. Although cancer cells can downregulate their CD80 to avoid detection by T cells, the study suggests that aspirin appears to help foil this strategy by boosting the production of CD80 on the surface of the tumor cells.
The researchers confirmed the clinical findings by showing that aspirin increased CD80 gene expression in lab-cultivated CRC cells.
“We didn’t expect the activation through CD80,” said Dr. Scarpa. “This means that aspirin can act on this very first interaction between the epithelial cell and the CD8+ lymphocyte.”
Overall, these new data suggest that aspirin helps activate the immune system, which helps explain its potential chemopreventive effect in CRC.
However, one puzzling result was that aspirin boosted expression of the PD-L1 gene in the CRC cells, said Joanna Davies, DPhil, an immunologist who heads up the San Diego Biomedical Research Institute, San Diego, California, and was not involved in the study.
PD-L1 serves as an “off” switch for patrolling T cells, which protects the tumor cell from being recognized.
“If aspirin is inducing PD-L1 on cancer cells, that is a potential problem,” said Dr. Davies. “An ideal therapy might be the combination of aspirin to enhance the CD8 T cells in the tumor and immune checkpoint blockade to block PD-L1.”
David Kerr, CBE, MD, DSc, agreed that high-dose aspirin plus immunotherapy might be “a wee bit more effective.” However, the combination would be blocked by the economics of drug development: “Will anybody ever do a trial of 10,000 patients to prove that? Not on your nelly,” said Dr. Kerr, professor of cancer medicine at the University of Oxford, Oxford, England.
Despite the small patient numbers in the study, Dr. Kerr felt encouraged by the IMMUNOREACT analysis. “It’s a plausible piece of science and some quite promising work on the tumor immune microenvironment and the effects of aspirin on it,” Dr. Kerr said in a recent commentary for this news organization.
Dr. Scarpa and Dr. Davies had no conflicts of interest to declare.
A version of this article appeared on Medscape.com .
Barriers to Mohs Micrographic Surgery in Japanese Patients With Basal Cell Carcinoma
Margin-controlled surgery for squamous cell carcinoma (SCC) on the lower lip was first performed by Dr. Frederic Mohs on June 30, 1936. Since then, thousands of skin cancer surgeons have refined and adopted the technique. Due to the high cure rate and sparing of normal tissue, Mohs micrographic surgery (MMS) has become the gold standard treatment for facial and special-site nonmelanoma skin cancer worldwide. Mohs micrographic surgery is performed on more than 876,000 tumors annually in the United States.1 Among 3.5 million Americans diagnosed with nonmelanoma skin cancer in 2006, one-quarter were treated with MMS.2 In Japan, basal cell carcinoma (BCC) is the most common skin malignancy, with an incidence of 3.34 cases per 100,000 individuals; SCC is the second most common, with an incidence of 2.5 cases per 100,000 individuals.3
The essential element that makes MMS unique is the careful microscopic examination of the entire margin of the removed specimen. Tissue processing is done with careful en face orientation to ensure that circumferential and deep margins are entirely visible. The surgeon interprets the slides and proceeds to remove the additional tumor as necessary. Because the same physician performs both the surgery and the pathologic assessment throughout the procedure, a precise correlation between the microscopic and surgical findings can be made. The surgeon can begin with smaller margins, removing minimal healthy tissue while removing all the cancer cells, which results in the smallest-possible skin defect and the best prognosis for the malignancy (Figure 1).
At the only facility in Japan offering MMS, the lead author (S.S.) has treated 52 lesions with MMS in 46 patients (2020-2022). Of these patients, 40 were White, 5 were Japanese, and 1 was of African descent. In this case series, we present 5 Japanese patients who had BCC treated with MMS.
Case Series
Patient 1—A 50-year-old Japanese woman presented to dermatology with a brown papule on the nasal tip of 1.25 years’ duration (Figure 2). A biopsy revealed infiltrative BCC (Figure 3), and the patient was referred to the dermatology department at a nearby university hospital. Because the BCC was an aggressive variant, wide local excision (WLE) with subsequent flap reconstruction was recommended as well as radiation therapy. The patient learned about MMS through an internet search and refused both options, seeking MMS treatment at our clinic. Although Japanese health insurance does not cover MMS, the patient had supplemental private insurance that did cover the cost. She provided consent to undergo the procedure. Physical examination revealed a 7.5×6-mm, brown-red macule with ill-defined borders on the tip of the nose. We used a 1.5-mm margin for the first stage of MMS (Figure 4A). The frozen section revealed that the tumor had been entirely excised in the first stage, leaving only a 10.5×9-mm skin defect that was reconstructed with a Dufourmentel flap (Figure 4B). National Comprehensive Cancer Network guidelines recommend a margin greater than 4 mm for infiltrative BCCs4; therefore, our technique reduced the total defect by at least 4 mm in a cosmetically sensitive area. The patient also did not need radiation therapy, which reduced morbidity. She remains recurrence free at 3.5-year follow-up, and the cosmetic outcome is favorable (Figure 4C).
Patient 2—A 63-year-old Japanese man presented to dermatology with a brown macule on the right lower eyelid of 2 years’ duration. A biopsy of the lesion was positive for nodular BCC. After being advised to undergo WLE and extensive reconstruction with plastic surgery, the patient learned of MMS through an internet search and found our clinic. Physical examination revealed a 7×5-mm brown macule on the right lower eyelid. The patient had supplemental private insurance that covered the cost of MMS, and he provided consent for the procedure. A 1.5-mm margin was taken for the first stage, resulting in a 10×8-mm defect superficial to the orbicularis oculi muscle. The frozen section revealed residual tumor exposure in the dermis at the 9- to 10-o’clock position. A second-stage excision was performed to remove an additional 1.5 mm of skin at the 9- to 12-o’clock position with a thin layer of the orbicularis oculi muscle. The subsequent histologic examination revealed no residual BCC, and the final 13×9-mm skin defect was reconstructed with a rotation flap. There were no signs of recurrence at 2.5-year follow-up with an excellent cosmetic outcome.
Patient 3—A 73-year-old Japanese man presented to a local university dermatology clinic with a new papule on the nose. The dermatologist suggested WLE with 4-mm margins and reconstruction of the skin defect 2 weeks later by a plastic surgeon. The patient was not satisfied with the proposed surgical plan, which led him to learn about MMS on the internet; he subsequently found our clinic. Physical examination revealed a 4×3.5-mm brown papule on the tip of the nose. He understood the nature of MMS and chose to pay out-of-pocket because Japanese health insurance did not cover the procedure. We used a 2-mm margin for the first stage, which created a 7.5×7-mm skin defect. The frozen section pathology revealed no residual BCC at the cut surface. The skin defect was reconstructed with a Limberg rhombic flap. There were no signs of recurrence at 1.5-year follow-up with a favorable cosmetic outcome.
Patient 4—A 45-year-old man presented to a dermatology clinic with a papule on the right side of the nose of 1 year’s duration. A biopsy revealed the lesion was a nodular BCC. The dermatologist recommended WLE at a general hospital, but the patient refused after learning about MMS. He subsequently made an appointment with our clinic. Physical examination revealed a 7×4-mm white papule on the right side of the nose. The patient had private insurance that covered the cost of MMS. The first stage was performed with 1.5-mm margins and was clear of residual tumor. A Limberg rhombic flap from the adjacent cheek was used to repair the final 10×7-mm skin defect. There were no signs of recurrence at 1 year and 9 months’ follow-up with a favorable cosmetic outcome.
Patient 5—A 76-year-old Japanese woman presented to a university hospital near Tokyo with a black papule on the left cutaneous lip of 5 years’ duration. A biopsy revealed nodular BCC, and WLE with flap reconstruction was recommended. The patient’s son learned about MMS through internet research and referred her to our clinic. Physical examination revealed a 7×5-mm black papule on the left upper lip. The patient’s private insurance covered the cost of MMS, and she consented to the procedure. We used a 2-mm initial margin, and the immediate frozen section revealed no signs of BCC at the cut surface. The 11×9-mm skin defect was reconstructed with a Limberg rhombic flap. There were no signs of recurrence at 1.5-year follow-up with a favorable cosmetic outcome.
Comment
We presented 5 cases of MMS in Japanese patients with BCC. More than 7000 new cases of nonmelanoma skin cancer occur every year in Japan.3 Only 0.04% of these cases—the 5 cases presented here—were treated with MMS in Japan in 2020 and 2021, in contrast to 25% in the United States in 2006.2
MMS vs Other BCC Treatments—Mohs micrographic surgery offers 2 distinct advantages over conventional excision: an improved cure rate while achieving a smaller final defect size, generally leading to better cosmetic outcomes. Overall 5-year recurrence rates of BCC are 10% for conventional surgical excision vs 1% for MMS, while the recurrence rates for SCC are 8% and 3%, respectively.5 A study of well-demarcated BCCs smaller than 2 cm that were treated with MMS with 2-mm increments revealed that 95% of the cases were free of malignancy within a 4-mm margin of the normal-appearing skin surrounding the tumor.6 Several articles have reported a 95% cure rate or higher with conventional excision of localized BCC,7 but 4- to 5-mm excision margins are required, resulting in a greater skin defect and a lower cure rate compared to MMS.
Aggressive subtypes of BCC have a higher recurrence rate. Rowe et al8 reported the following 5-year recurrence rates: 5.6% for MMS, 17.4% for conventional surgical excision, 40.0% for curettage and electrodesiccation, and 9.8% for radiation therapy. Primary BCCs with high-risk histologic subtypes have a 10-year recurrence rate of 4.4% with MMS vs 12.2% with conventional excision.9 These findings reveal that MMS yields a better prognosis compared to traditional treatment methods for recurrent BCCs and BCCs of high-risk histologic subtypes.
The primary reason for the excellent cure rate seen in MMS is the ability to perform complete margin assessment. Peripheral and deep en face margin assessment (PDEMA) is crucial in achieving high cure rates with narrow margins. In WLE (Figure 1), vertical sectioning (also known as bread-loafing) does not achieve direct visualization of the entire surgical margin, as this technique only evaluates random sections and does not achieve PDEMA.10 The bread-loafing method is used almost exclusively in Japan and visualizes only 0.1% of the entire margin compared to 100% with MMS.11 Beyond the superior cure rate, the MMS technique often yields smaller final defects compared to WLE. All 5 of our patients achieved complete tumor removal while sparing more normal tissue compared to conventional WLE, which takes at least a 4-mm margin in all directions.
Barriers to Adopting MMS in Japan—There are many barriers to the broader adoption of MMS in Japan. A guideline of the Japanese Dermatological Association states that MMS “is complicated, requires special training for acquisition, and requires time and labor for implementation of a series of processes, and it has not gained wide acceptance in Japan because of these disadvantages.”3 There currently are no MMS training programs in Japan. We refute this statement from the Japanese Dermatological Association because, in our experience, a single surgeon and a single histotechnician familiar with MMS are sufficient for a facility to offer the procedure (the lead author of this study [S.S.] acts as both the surgeon and the histotechnician). Another misconception among some physicians in Japan is that cancer on ethnically Japanese skin is uniquely suited to excision without microscopic verification of tumor clearance because the borders of the tumors are easily identified. This belief is based on good cure rates reported for excision of well-demarcated pigmented BCCs in a Japanese cohort; however, that study evaluated specimens with the conventional bread-loafing technique, not with PDEMA.12
Eighty percent (4/5) of our patients presented with nodular BCC, and only 1 required a second stage. In comparison, we also treated 16 White patients with nodular BCC with MMS during the same period, and 31% (5/16) required more than 1 stage, with 1 patient requiring 3 stages. This cohort, however, is too small to demonstrate a statistically significant difference (S.S., unpublished data, 2020-2022).
A study in Singapore reported the postsurgical complication rate and 5-year recurrence rate for 481 tumors (92% BCC and 7.5% SCC). The median follow-up duration after MMS was 36 months, and the recurrence rate was 0.6%. The postsurgical complications included 11 (2.3%) cases with superficial tip necrosis of surgical flaps/grafts, 2 (0.4%) with mild wound dehiscence, 1 (0.2%) with minor surgical site bleeding, and 1 (0.2%) with minor wound infection.13 This study supports the notion that MMS is equally effective for Asian patients.
Awareness of MMS in Japan is lacking, and most Japanese dermatologists do not know about the technique. All 5 patients in our case series asked their dermatologists about alternative treatment options and were not offered MMS. In each case, the patients learned of the technique through internet research.
The lack of insurance reimbursement for MMS in Japan is another barrier. Because the national health insurance does not reimburse for MMS, the procedure is relatively unavailable to most Japanese citizens who cannot pay out-of-pocket for the treatment and do not have supplemental insurance. Mohs micrographic surgery may seem expensive compared to WLE followed by repair; however, in the authors’ experience, in Japan, excision without MMS may require general sedation and multiple surgeries to reconstruct larger skin defects, leading to greater morbidity and risk for the patient.
Conclusion
Mohs micrographic surgery in Japan is in its infancy, and further studies showing recurrence rates and long-term prognosis are needed. Such data should help increase awareness of MMS among Japanese physicians as an excellent treatment option for their patients. Furthermore, as Japan becomes more heterogenous as a society and the US Military increases its presence in the region, the need for MMS is likely to increase.
Acknowledgments—We appreciate the proofreading support by Mark Bivens, MBA, MSc (Tokyo, Japan), as well as the technical support from Ben Tallon, MBChB, and Robyn Mason (both in Tauranga, New Zealand) to start MMS at our clinic.
- Asgari MM, Olson J, Alam M. Needs assessment for Mohs micrographic surgery. Dermatol Clin. 2012;30:167-175. doi:10.1016/j.det.2011.08.010
- Connolly SM, Baker DR, et al. AAD/ACMS/ASDSA/ASMS 2012 appropriate use criteria for Mohs micrographic surgery: a report of the American Academy of Dermatology, American College of Mohs Surgery, American Society for Dermatologic Surgery Association, and the American Society for Mohs Surgery. J Am Acad Dermatol. 2012;67:531-550.
- Ansai SI, Umebayashi Y, Katsumata N, et al. Japanese Dermatological Association Guidelines: outlines of guidelines for cutaneous squamous cell carcinoma 2020. J Dermatol. 2021;48:E288-E311.
- Schmults CD, Blitzblau R, Aasi SZ, et al. Basal cell skin cancer, version 2.2024, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2023;21:1181-1203. doi:10.6004/jnccn.2023.0056
- Snow SN, Gunkel J. Mohs surgery. In: Bolognia JL, Schaffer JV, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2017:2445-2455. doi:10.1016/b978-0-070-94171-3.00041-7
- Wolf DJ, Zitelli JA. Surgical margins for basal cell carcinoma. Arch Dermatol. 1987;123:340-344.
- Quazi SJ, Aslam N, Saleem H, et al. Surgical margin of excision in basal cell carcinoma: a systematic review of literature. Cureus. 2020;12:E9211.
- Rowe DE, Carroll RJ, Day CL Jr. Mohs surgery is the treatment of choice for recurrent (previously treated) basal cell carcinoma. J Dermatol Surg Oncol. 1989;15:424-431.
- van Loo E, Mosterd K, Krekels GA, et al. Surgical excision versus Mohs’ micrographic surgery for basal cell carcinoma of the face. Eur J Cancer. 2014;50:3011-3020.
- Schmults CD, Blitzblau R, Aasi SZ, et al. NCCN Guidelines Insights: Squamous Cell Skin Cancer, Version 1.2022. J Natl Compr Canc Netw. 2021;19:1382-1394.
- Hui AM, Jacobson M, Markowitz O, et al. Mohs micrographic surgery for the treatment of melanoma. Dermatol Clin. 2012;30:503-515.
- Ito T, Inatomi Y, Nagae K, et al. Narrow-margin excision is a safe, reliable treatment for well-defined, primary pigmented basal cell carcinoma: an analysis of 288 lesions in Japan. J Eur Acad Dermatol Venereol. 2015;29:1828-1831.
- Ho WYB, Zhao X, Tan WPM. Mohs micrographic surgery in Singapore: a long-term follow-up review. Ann Acad Med Singap. 2021;50:922-923.
Barriers to Adopting MMS in Japan—There are many barriers to the broader adoption of MMS in Japan. A guideline of the Japanese Dermatological Association says, MMS “is complicated, requires special training for acquisition, and requires time and labor for implementation of a series of processes, and it has not gained wide acceptance in Japan because of these disadvantages.”3 There currently are no MMS training programs in Japan. We refute this statement from the Japanese Dermatological Association because, in our experience, only 1 surgeon plus a single histotechnician familiar with MMS is sufficient for a facility to offer the procedure (the lead author of this study [S.S.] acts as both the surgeon and the histotechnician). Another misconception among some physicians in Japan is that cancer on ethnically Japanese skin is uniquely suited to excision without microscopic verification of tumor clearance because the borders of the tumors are easily identified, which was based on good cure rates for the excision of well-demarcated pigmented BCCs in a Japanese cohort. This study of a Japanese cohort investigated the specimens with the conventional bread-loafing technique but not with the PDEMA.12
Eighty percent (4/5) of our patients presented with nodular BCC, and only 1 required a second stage. In comparison, we also treated 16 White patients with nodular BCC with MMS during the same period, and 31% (5/16) required more than 1 stage, with 1 patient requiring 3 stages. This cohort, however, is too small to demonstrate a statistically significant difference (S.S., unpublished data, 2020-2022).
A study in Singapore reported the postsurgical complication rate and 5-year recurrence rate for 481 tumors (92% BCC and 7.5% SCC). The median follow-up duration after MMS was 36 months, and the recurrence rate was 0.6%. The postsurgical complications included 11 (2.3%) cases with superficial tip necrosis of surgical flaps/grafts, 2 (0.4%) with mild wound dehiscence, 1 (0.2%) with minor surgical site bleeding, and 1 (0.2%) with minor wound infection.13 This study supports the notion that MMS is equally effective for Asian patients.
Awareness of MMS in Japan is lacking, and most Japanese dermatologists do not know about the technique. All 5 patients in our case series asked their dermatologists about alternative treatment options and were not offered MMS. In each case, the patients learned of the technique through internet research.
The lack of insurance reimbursement for MMS in Japan is another barrier. Because the national health insurance does not reimburse for MMS, the procedure is relatively unavailable to most Japanese citizens who cannot pay out-of-pocket for the treatment and do not have supplemental insurance. Mohs micrographic surgery may seem expensive compared to WLE followed by repair; however, in the authors’ experience, in Japan, excision without MMS may require general sedation and multiple surgeries to reconstruct larger skin defects, leading to greater morbidity and risk for the patient.
Conclusion
Mohs micrographic surgery in Japan is in its infancy, and further studies showing recurrence rates and long-term prognosis are needed. Such data should help increase awareness of MMS among Japanese physicians as an excellent treatment option for their patients. Furthermore, as Japan becomes more heterogenous as a society and the US Military increases its presence in the region, the need for MMS is likely to increase.
Acknowledgments—We appreciate the proofreading support by Mark Bivens, MBA, MSc (Tokyo, Japan), as well as the technical support from Ben Tallon, MBChB, and Robyn Mason (both in Tauranga, New Zealand) to start MMS at our clinic.
Margin-controlled surgery for squamous cell carcinoma (SCC) on the lower lip was first performed by Dr. Frederic Mohs on June 30, 1936. Since then, thousands of skin cancer surgeons have refined and adopted the technique. Due to the high cure rate and sparing of normal tissue, Mohs micrographic surgery (MMS) has become the gold standard treatment for facial and special-site nonmelanoma skin cancer worldwide. Mohs micrographic surgery is performed on more than 876,000 tumors annually in the United States.1 Among 3.5 million Americans diagnosed with nonmelanoma skin cancer in 2006, one-quarter were treated with MMS.2 In Japan, basal cell carcinoma (BCC) is the most common skin malignancy, with an incidence of 3.34 cases per 100,000 individuals; SCC is the second most common, with an incidence of 2.5 cases per 100,000 individuals.3
The essential element that makes MMS unique is the careful microscopic examination of the entire margin of the removed specimen. Tissue processing is done with careful en face orientation to ensure that circumferential and deep margins are entirely visible. The surgeon interprets the slides and proceeds to remove the additional tumor as necessary. Because the same physician performs both the surgery and the pathologic assessment throughout the procedure, a precise correlation between the microscopic and surgical findings can be made. The surgeon can begin with smaller margins, removing minimal healthy tissue while removing all the cancer cells, which results in the smallest-possible skin defect and the best prognosis for the malignancy (Figure 1).
At the only facility in Japan offering MMS, the lead author (S.S.) has treated 52 lesions with MMS in 46 patients (2020-2022). Of these patients, 40 were White, 5 were Japanese, and 1 was of African descent. In this case series, we present 5 Japanese patients who had BCC treated with MMS.
Case Series
Patient 1—A 50-year-old Japanese woman presented to dermatology with a brown papule on the nasal tip of 1.25 years' duration (Figure 2). A biopsy revealed infiltrative BCC (Figure 3), and the patient was referred to the dermatology department at a nearby university hospital. Because the BCC was an aggressive variant, wide local excision (WLE) with subsequent flap reconstruction as well as radiation therapy was recommended. The patient learned about MMS through an internet search, refused both options, and sought MMS treatment at our clinic. Although Japanese health insurance does not cover MMS, the patient had supplemental private insurance that did cover the cost, and she provided consent to undergo the procedure. Physical examination revealed a 7.5×6-mm, brown-red macule with ill-defined borders on the tip of the nose. We used a 1.5-mm margin for the first stage of MMS (Figure 4A). The frozen section revealed that the tumor had been entirely excised in the first stage, leaving only a 10.5×9-mm skin defect that was reconstructed with a Dufourmentel flap (Figure 4B). There were no signs of recurrence at 3.5-year follow-up, and the cosmetic outcome was favorable (Figure 4C). National Comprehensive Cancer Network guidelines recommend a margin greater than 4 mm for infiltrative BCCs4; therefore, our technique reduced the total defect by at least 4 mm in a cosmetically sensitive area. The patient also avoided radiation therapy, which reduced morbidity.
Patient 2—A 63-year-old Japanese man presented to dermatology with a brown macule on the right lower eyelid of 2 years’ duration. A biopsy of the lesion was positive for nodular BCC. After being advised to undergo WLE and extensive reconstruction with plastic surgery, the patient learned of MMS through an internet search and found our clinic. Physical examination revealed a 7×5-mm brown macule on the right lower eyelid. The patient had supplemental private insurance that covered the cost of MMS, and he provided consent for the procedure. A 1.5-mm margin was taken for the first stage, resulting in a 10×8-mm defect superficial to the orbicularis oculi muscle. The frozen section revealed residual tumor exposure in the dermis at the 9- to 10-o’clock position. A second-stage excision was performed to remove an additional 1.5 mm of skin at the 9- to 12-o’clock position with a thin layer of the orbicularis oculi muscle. The subsequent histologic examination revealed no residual BCC, and the final 13×9-mm skin defect was reconstructed with a rotation flap. There were no signs of recurrence at 2.5-year follow-up with an excellent cosmetic outcome.
Patient 3—A 73-year-old Japanese man presented to a local university dermatology clinic with a new papule on the nose. The dermatologist suggested WLE with 4-mm margins and reconstruction of the skin defect 2 weeks later by a plastic surgeon. The patient was not satisfied with the proposed surgical plan, which led him to learn about MMS on the internet; he subsequently found our clinic. Physical examination revealed a 4×3.5-mm brown papule on the tip of the nose. He understood the nature of MMS and chose to pay out-of-pocket because Japanese health insurance did not cover the procedure. We used a 2-mm margin for the first stage, which created a 7.5×7-mm skin defect. The frozen section pathology revealed no residual BCC at the cut surface. The skin defect was reconstructed with a Limberg rhombic flap. There were no signs of recurrence at 1.5-year follow-up with a favorable cosmetic outcome.
Patient 4—A 45-year-old man presented to a dermatology clinic with a papule on the right side of the nose of 1 year’s duration. A biopsy revealed the lesion was a nodular BCC. The dermatologist recommended WLE at a general hospital, but the patient refused after learning about MMS. He subsequently made an appointment with our clinic. Physical examination revealed a 7×4-mm white papule on the right side of the nose. The patient had private insurance that covered the cost of MMS. The first stage was performed with 1.5-mm margins and was clear of residual tumor. A Limberg rhombic flap from the adjacent cheek was used to repair the final 10×7-mm skin defect. There were no signs of recurrence at 1 year and 9 months’ follow-up with a favorable cosmetic outcome.
Patient 5—A 76-year-old Japanese woman presented to a university hospital near Tokyo with a black papule on the left cutaneous lip of 5 years’ duration. A biopsy revealed nodular BCC, and WLE with flap reconstruction was recommended. The patient’s son learned about MMS through internet research and referred her to our clinic. Physical examination revealed a 7×5-mm black papule on the left upper lip. The patient’s private insurance covered the cost of MMS, and she consented to the procedure. We used a 2-mm initial margin, and the immediate frozen section revealed no signs of BCC at the cut surface. The 11×9-mm skin defect was reconstructed with a Limberg rhombic flap. There were no signs of recurrence at 1.5-year follow-up with a favorable cosmetic outcome.
Comment
We presented 5 cases of MMS in Japanese patients with BCC. More than 7000 new cases of nonmelanoma skin cancer occur every year in Japan.3 Only 0.04% of these cases—the 5 cases presented here—were treated with MMS in Japan in 2020 and 2021, in contrast to 25% in the United States in 2006.2
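The 0.04% figure can be verified from the counts above: 5 MMS-treated lesions over the 2-year period (2020-2021) against roughly 7000 new nonmelanoma skin cancers per year (treating the annual incidence as constant, which is an approximation):

```latex
\[
\frac{5 \text{ MMS cases}}{7000 \text{ cases/yr} \times 2 \text{ yr}}
\approx 3.6 \times 10^{-4} \approx 0.04\%
\]
```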
MMS vs Other BCC Treatments—Mohs micrographic surgery offers 2 distinct advantages over conventional excision: a higher cure rate and a smaller final defect size, which generally leads to better cosmetic outcomes. Overall 5-year recurrence rates of BCC are 10% for conventional surgical excision vs 1% for MMS, while the recurrence rates for SCC are 8% and 3%, respectively.5 A study of well-demarcated BCCs smaller than 2 cm that were treated with MMS in 2-mm increments revealed that 95% of the cases were free of malignancy within a 4-mm margin of the normal-appearing skin surrounding the tumor.6 Several articles have reported a 95% cure rate or higher with conventional excision of localized BCC,7 but 4- to 5-mm excision margins are required, resulting in a larger skin defect and a lower cure rate compared with MMS.
Aggressive subtypes of BCC have a higher recurrence rate. For recurrent BCCs, Rowe et al8 reported the following 5-year recurrence rates: 5.6% for MMS, 17.4% for conventional surgical excision, 40.0% for curettage and electrodesiccation, and 9.8% for radiation therapy. Primary BCCs with high-risk histologic subtypes have a 10-year recurrence rate of 4.4% with MMS vs 12.2% with conventional excision.9 These findings reveal that MMS yields a better prognosis than traditional treatment methods for recurrent BCCs and BCCs of high-risk histologic subtypes.
The primary reason for the excellent cure rate seen in MMS is the ability to perform complete margin assessment. Peripheral and deep en face margin assessment (PDEMA) is crucial in achieving high cure rates with narrow margins. In WLE (Figure 1), vertical sectioning (also known as bread-loafing) does not achieve direct visualization of the entire surgical margin, as this technique only evaluates random sections and does not achieve PDEMA.10 The bread-loafing method is used almost exclusively in Japan and visualizes only 0.1% of the entire margin compared to 100% with MMS.11 Beyond the superior cure rate, the MMS technique often yields smaller final defects compared to WLE. All 5 of our patients achieved complete tumor removal while sparing more normal tissue compared to conventional WLE, which takes at least a 4-mm margin in all directions.
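A rough geometric sketch shows where a figure on the order of 0.1% comes from. Assuming typical histologic sections of about 4 µm thickness cut at roughly 3-mm intervals through the specimen (illustrative values, not figures stated in reference 11), the fraction of the margin actually visualized is approximately:

```latex
\[
\frac{4\ \mu\text{m (section thickness)}}{3\ \text{mm (interval between sections)}}
= \frac{0.004\ \text{mm}}{3\ \text{mm}} \approx 0.13\%
\]
```

The en face processing of MMS, by contrast, places the entire peripheral and deep margin on the slide.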
Barriers to Adopting MMS in Japan—There are many barriers to the broader adoption of MMS in Japan. A guideline of the Japanese Dermatological Association states that MMS "is complicated, requires special training for acquisition, and requires time and labor for implementation of a series of processes, and it has not gained wide acceptance in Japan because of these disadvantages."3 There currently are no MMS training programs in Japan. We refute this statement because, in our experience, 1 surgeon plus a single histotechnician familiar with MMS is sufficient for a facility to offer the procedure (the lead author of this study [S.S.] acts as both the surgeon and the histotechnician). Another misconception among some physicians in Japan is that cancer on ethnically Japanese skin is uniquely suited to excision without microscopic verification of tumor clearance because the borders of the tumors are easily identified. This belief is based on good cure rates reported for excision of well-demarcated pigmented BCCs in a Japanese cohort; however, that study evaluated the specimens with the conventional bread-loafing technique, not with PDEMA.12
Eighty percent (4/5) of our patients presented with nodular BCC, and only 1 required a second stage. In comparison, we also treated 16 White patients with nodular BCC with MMS during the same period, and 31% (5/16) required more than 1 stage, with 1 patient requiring 3 stages. This cohort, however, is too small to demonstrate a statistically significant difference (S.S., unpublished data, 2020-2022).
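The authors' caution about sample size can be illustrated with a two-sided Fisher exact test on the reported stage counts (1 of 5 Japanese patients vs 5 of 16 White patients requiring more than 1 stage). This is our own sketch of the comparison, not an analysis performed in the article:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(k):
        # Probability of k counts in cell (1,1) given fixed margins
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Small tolerance guards against floating-point ties
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# 1/5 Japanese vs 5/16 White patients needed more than 1 stage
p = fisher_exact_two_sided(1, 4, 5, 11)
print(f"two-sided p = {p:.3f}")  # far above 0.05: no significant difference
```

With these counts the test is nowhere near significance, consistent with the authors' statement that the cohort is too small to demonstrate a difference.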
A study in Singapore reported the postsurgical complication rate and 5-year recurrence rate for 481 tumors (92% BCC and 7.5% SCC). The median follow-up duration after MMS was 36 months, and the recurrence rate was 0.6%. The postsurgical complications included 11 (2.3%) cases with superficial tip necrosis of surgical flaps/grafts, 2 (0.4%) with mild wound dehiscence, 1 (0.2%) with minor surgical site bleeding, and 1 (0.2%) with minor wound infection.13 This study supports the notion that MMS is equally effective for Asian patients.
Awareness of MMS in Japan is lacking, and most Japanese dermatologists do not know about the technique. All 5 patients in our case series asked their dermatologists about alternative treatment options and were not offered MMS. In each case, the patients learned of the technique through internet research.
The lack of insurance reimbursement for MMS in Japan is another barrier. Because the national health insurance does not reimburse for MMS, the procedure is relatively unavailable to most Japanese citizens who cannot pay out-of-pocket for the treatment and do not have supplemental insurance. Mohs micrographic surgery may seem expensive compared to WLE followed by repair; however, in the authors’ experience, in Japan, excision without MMS may require general sedation and multiple surgeries to reconstruct larger skin defects, leading to greater morbidity and risk for the patient.
Conclusion
Mohs micrographic surgery in Japan is in its infancy, and further studies showing recurrence rates and long-term prognosis are needed. Such data should help increase awareness of MMS among Japanese physicians as an excellent treatment option for their patients. Furthermore, as Japan becomes more heterogeneous as a society and the US military increases its presence in the region, the need for MMS is likely to increase.
Acknowledgments—We appreciate the proofreading support by Mark Bivens, MBA, MSc (Tokyo, Japan), as well as the technical support from Ben Tallon, MBChB, and Robyn Mason (both in Tauranga, New Zealand) to start MMS at our clinic.
1. Asgari MM, Olson J, Alam M. Needs assessment for Mohs micrographic surgery. Dermatol Clin. 2012;30:167-175. doi:10.1016/j.det.2011.08.010
2. Connolly SM, Baker DR, Coldiron BM, et al. AAD/ACMS/ASDSA/ASMS 2012 appropriate use criteria for Mohs micrographic surgery: a report of the American Academy of Dermatology, American College of Mohs Surgery, American Society for Dermatologic Surgery Association, and the American Society for Mohs Surgery. J Am Acad Dermatol. 2012;67:531-550.
3. Ansai SI, Umebayashi Y, Katsumata N, et al. Japanese Dermatological Association Guidelines: outlines of guidelines for cutaneous squamous cell carcinoma 2020. J Dermatol. 2021;48:E288-E311.
4. Schmults CD, Blitzblau R, Aasi SZ, et al. Basal cell skin cancer, version 2.2024, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2023;21:1181-1203. doi:10.6004/jnccn.2023.0056
5. Snow SN, Gunkel J. Mohs surgery. In: Bolognia JL, Schaffer JV, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2017:2445-2455. doi:10.1016/b978-0-070-94171-3.00041-7
6. Wolf DJ, Zitelli JA. Surgical margins for basal cell carcinoma. Arch Dermatol. 1987;123:340-344.
7. Quazi SJ, Aslam N, Saleem H, et al. Surgical margin of excision in basal cell carcinoma: a systematic review of literature. Cureus. 2020;12:E9211.
8. Rowe DE, Carroll RJ, Day CL Jr. Mohs surgery is the treatment of choice for recurrent (previously treated) basal cell carcinoma. J Dermatol Surg Oncol. 1989;15:424-431.
9. van Loo E, Mosterd K, Krekels GA, et al. Surgical excision versus Mohs' micrographic surgery for basal cell carcinoma of the face. Eur J Cancer. 2014;50:3011-3020.
10. Schmults CD, Blitzblau R, Aasi SZ, et al. NCCN Guidelines Insights: Squamous Cell Skin Cancer, Version 1.2022. J Natl Compr Canc Netw. 2021;19:1382-1394.
11. Hui AM, Jacobson M, Markowitz O, et al. Mohs micrographic surgery for the treatment of melanoma. Dermatol Clin. 2012;30:503-515.
12. Ito T, Inatomi Y, Nagae K, et al. Narrow-margin excision is a safe, reliable treatment for well-defined, primary pigmented basal cell carcinoma: an analysis of 288 lesions in Japan. J Eur Acad Dermatol Venereol. 2015;29:1828-1831.
13. Ho WYB, Zhao X, Tan WPM. Mohs micrographic surgery in Singapore: a long-term follow-up review. Ann Acad Med Singap. 2021;50:922-923.
Practice Points
- Mohs micrographic surgery (MMS) is a safe and effective treatment method for nonmelanoma skin cancer. In some cases, this procedure is superior to standard wide local excision and repair.
- For the broader adoption of this vital technique in Japan—where MMS is not well established—increased awareness of treatment outcomes among Japanese physicians is needed.