Low-risk adenomas may not elevate risk of CRC-related death
Unlike high-risk adenomas (HRAs), low-risk adenomas (LRAs) have a minimal association with risk of metachronous colorectal cancer (CRC), and no relationship with odds of metachronous CRC-related mortality, according to a meta-analysis of more than 500,000 individuals.
These findings should impact surveillance guidelines and make follow-up the same for individuals with LRAs or no adenomas, reported lead author Abhiram Duvvuri, MD, of the division of gastroenterology and hepatology at the University of Kansas, Kansas City, and colleagues. Currently, the United States Multi-Society Task Force on Colorectal Cancer advises colonoscopy intervals of 3 years for individuals with HRAs, 7-10 years for those with LRAs, and 10 years for those without adenomas.
“The evidence supporting these surveillance recommendations for clinically relevant endpoints such as cancer and cancer-related deaths among patients who undergo adenoma removal, particularly LRA, is minimal, because most of the evidence was based on the surrogate risk of metachronous advanced neoplasia,” the investigators wrote in Gastroenterology.
To provide more solid evidence, the investigators performed a systematic review and meta-analysis, ultimately analyzing 12 studies comprising 510,019 individuals with a mean age of 59.2 years. All studies reported rates of LRA, HRA, or no adenoma at baseline colonoscopy, plus the incidence of metachronous CRC and/or CRC-related mortality. With these data, the investigators determined the incidence of metachronous CRC and CRC-related mortality for each adenoma group and compared these incidences per 10,000 person-years of follow-up across groups.
After a mean follow-up of 8.5 years, patients with HRAs had a significantly higher incidence of CRC than patients with LRAs (13.81 vs. 4.5 per 10,000 person-years; odds ratio, 2.35; 95% confidence interval, 1.72-3.20) or no adenomas (13.81 vs. 3.4; OR, 2.92; 95% CI, 2.31-3.69). Similarly, but to a lesser degree, LRAs were associated with a significantly greater risk of CRC than no adenomas (4.5 vs. 3.4; OR, 1.26; 95% CI, 1.06-1.51).
Data on CRC-related mortality further supported this minimal risk profile: LRAs did not significantly increase the risk of CRC-related mortality compared with no adenomas (OR, 1.15; 95% CI, 0.76-1.74). In contrast, HRAs were associated with a significantly greater risk of CRC-related death than both LRAs (OR, 2.48; 95% CI, 1.30-4.75) and no adenomas (OR, 2.69; 95% CI, 1.87-3.87).
The investigators acknowledged certain limitations of their study. For one, the meta-analysis included no randomized controlled trials, which can introduce bias. Loss of patients to follow-up is also possible; however, the investigators noted that a robust sample of patients remained available for the study outcomes. There is also a risk of comparability bias because the HRA and LRA groups underwent more colonoscopies; however, the duration of follow-up and the timing of the last colonoscopy were similar among groups. Lastly, the patient sample may not have been representative because of healthy screenee bias, but the investigators compared the groups against the general population to minimize that bias.
The investigators also highlighted several strengths of their study that make their findings more reliable than those of past meta-analyses. It is the largest study of its kind to date and included a considerably higher number of patients with LRAs or no adenomas. Also, in contrast with previous studies, it evaluated CRC and CRC-related mortality rather than advanced adenomas, they noted.
“Furthermore, we also analyzed CRC incidence and mortality in the LRA group compared with the general population, with the [standardized incidence ratio] being lower and [standardized mortality ratio] being comparable, confirming that it is indeed a low-risk group,” they wrote.
Considering these strengths and the nature of their findings, Dr. Duvvuri and colleagues called for a less intensive approach to CRC surveillance among individuals with LRAs, as well as more research into extending colonoscopy intervals even further.
“We recommend that the interval for follow-up colonoscopy should be the same in patients with LRAs or no adenomas but that the HRA group should have a more frequent surveillance interval for CRC surveillance compared with these groups,” they concluded. “Future studies should evaluate whether surveillance intervals could be lengthened beyond 10 years in the no-adenoma and LRA groups after an initial high-quality index colonoscopy.”
One author disclosed affiliations with Erbe, Cdx Labs, Aries, and others. Dr. Duvvuri and the remaining authors disclosed no conflicts.
FROM GASTROENTEROLOGY
Surveillance endoscopy in Barrett’s may perform better than expected
For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.
The neoplasia detection rate, defined as the rate of HGD or EAC found on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.
This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.
“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”
On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.
To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists; the rest were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.
Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Out of 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, which suggests that 13% of diagnoses were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
Technology challenged by technique
The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high definition monitors and high resolution endoscopes in subsequent years.
“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.
The importance of technique may be further supported by another finding: Gastroenterologists detected neoplasia almost four times as often as did nongastroenterologists (odds ratio, 3.6; P = .0154).
“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.
“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”
The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.
“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.
The study was funded by the National Institute on Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Lasting norovirus immunity may depend on T cells
Protection against norovirus gastroenteritis is supported in part by norovirus-specific CD8+ T cells that reside in peripheral, intestinal, and lymphoid tissues, according to investigators.
These findings, and the molecular tools used to discover them, could guide development of a norovirus vaccine and novel cellular therapies, according to lead author Ajinkya Pattekar, MD, of the University of Pennsylvania, Philadelphia, and colleagues.
“Currently, there are no approved pharmacologic therapies against norovirus, and despite several promising clinical trials, an effective vaccine is not available,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology. This gap may stem from an incomplete understanding of norovirus immunity, according to Dr. Pattekar and colleagues.
They noted that most previous research has focused on humoral immunity, which appears to vary between individuals: Some people exhibit a strong humoral response, while others mount only partial protection. The investigators also noted that, depending on which studies were examined, this type of defense could last years or fade within weeks to months, and that “immune mechanisms other than antibodies may be important for protection against noroviruses.”
Specifically, cellular immunity may be at work. A 2020 study involving volunteers showed that T cells were cross-reactive to a type of norovirus the participants had never been exposed to.
“These findings suggest that T cells may target conserved epitopes and could offer cross-protection against a broad range of noroviruses,” Dr. Pattekar and colleagues wrote.
To test this hypothesis, they first collected peripheral blood mononuclear cells (PBMCs) from three healthy volunteers with unknown norovirus exposure history. Serum samples were then screened for functional norovirus antibodies based on their ability to block binding between virus-like particles (VLPs) and histo–blood group antigens (HBGAs). This screening revealed disparate profiles of blocking antibodies against various norovirus strains: While donors 1 and 2 had antibodies against multiple strains, donor 3 lacked norovirus antibodies. Further testing showed that this latter individual was a nonsecretor with limited exposure history.
Next, the investigators tested donor PBMCs for norovirus-specific T-cell responses with use of overlapping libraries of peptides for each of the three norovirus open reading frames (ORF1, ORF2, and ORF3). T-cell responses, predominantly involving CD8+ T cells, were observed in all donors. While donor 1 had the greatest response to ORF1, donors 2 and 3 had responses that focused on ORF2.
“Thus, norovirus-specific T cells targeting ORF1 and ORF2 epitopes are present in peripheral blood from healthy donors regardless of secretor status,” the investigators wrote.
To better characterize the T-cell epitopes, the investigators subdivided the overlapping peptide libraries into pools of shorter peptides, then exposed donor PBMCs to these smaller component pools. This mapping revealed eight HLA class I–restricted epitopes derived from a genogroup II.4 (GII.4) pandemic norovirus strain; this group of variants has been responsible for all six norovirus pandemics since 1996.
Closer examination of the epitopes showed that they were “broadly conserved beyond GII.4.” Only one epitope exhibited variation in the C-terminal aromatic anchor, and it was nondominant. The investigators therefore identified seven immunodominant CD8+ epitopes, which they considered “valuable targets for vaccine and cell-based therapies.
“These data further confirm that epitope-specific CD8+ T cells are a universal feature of the overall norovirus immune response and could be an attractive target for future vaccines,” the investigators wrote.
Additional testing involving samples of spleen, mesenteric lymph nodes, and duodenum from deceased individuals showed presence of norovirus-specific CD8+ T cells, with particular abundance in intestinal tissue, and distinct phenotypes and functional properties in different tissue types.
“Future studies using tetramers and intestinal samples should build on these observations and fully define the location and microenvironment of norovirus-specific T cells,” the investigators wrote. “If carried out in the context of a vaccine trial, such studies could be highly valuable in elucidating tissue-resident memory correlates of norovirus immunity.”
The study was funded by the National Institutes of Health, the Wellcome Trust, and Deutsche Forschungsgemeinschaft. The investigators reported no conflicts of interest.
Understanding the immune correlates of protection for norovirus is important for the development and evaluation of candidate vaccines and to better clarify the variation in host susceptibility to infection.
Prior research on the human immune response to norovirus infection has largely focused on the antibody response. Less is known about the antinorovirus T-cell response, which can target and clear virus-infected cells. Notably, antiviral CD8+ T cells are critical for control of norovirus infection in mouse models, which suggests a similarly important role in humans. In this study by Dr. Pattekar and colleagues, the authors generated human norovirus-specific peptides covering the entire viral proteome and then used these peptides to identify and characterize norovirus-specific CD8+ T cells from the blood, spleen, lymph nodes, and intestinal lamina propria of human donors who were not actively infected with norovirus. The authors identified virus-specific memory T cells in the blood and intestines. Further, they found several HLA class I–restricted viral epitopes that are highly conserved among the most commonly circulating GII.4 noroviruses. These norovirus-specific T cells represented about 0.5% of all cells, revealing that norovirus induces a durable population of memory T cells.
Further research is needed to determine whether norovirus-specific CD8+ T cells are necessary or sufficient for preventing norovirus infection and disease in people. This important study provides novel tools and increases our understanding of cell-mediated immunity to human norovirus infection that will influence future vaccine design and evaluation for this important human pathogen.
Craig B. Wilen, MD, PhD, is assistant professor of laboratory medicine and immunobiology at Yale University, New Haven, Conn. He does not have any conflicts to disclose.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Low-risk adenomas may not elevate risk of CRC-related death
Despite evidence suggesting that colorectal cancer (CRC) incidence and mortality can be decreased through the endoscopic removal of adenomatous polyps, the question remains as to whether further endoscopic surveillance is necessary after polypectomy and, if so, how often. The most recent iteration of the United States Multi-Society Task Force guidelines endorsed a lengthening of the surveillance interval following the removal of low-risk adenomas (LRAs), defined as 1-2 tubular adenomas <10 mm with low-grade dysplasia, while maintaining a shorter interval for high-risk adenomas (HRAs), defined as advanced adenomas (villous histology, high-grade dysplasia, or ≥10 mm) or ≥3 adenomas.
Dr. Duvvuri and colleagues present the results of a systematic review and meta-analysis of studies examining metachronous CRC incidence and mortality following index colonoscopy. They found a small but statistically significant increase in the incidence of CRC but no significant difference in CRC mortality when comparing patients with LRAs to those with no adenomas. In contrast, they found both a statistically and clinically significant difference in CRC incidence/mortality when comparing patients with HRAs to both those with no adenomas and those with LRAs. They concluded that these results support a recommendation for no difference in follow-up surveillance between patients with LRAs and no adenomas but do support more frequent surveillance for patients with HRAs at index colonoscopy.
Future studies should better examine the timing of neoplasm incidence/recurrence following adenoma removal and also examine metachronous CRC incidence/mortality in patients with sessile serrated lesions at index colonoscopy.
Reid M. Ness, MD, MPH, AGAF, is an associate professor in the division of gastroenterology, hepatology, and nutrition at Vanderbilt University Medical Center and at the VA Tennessee Valley Healthcare System, Nashville, campus. He is an investigator in the Vanderbilt-Ingram Cancer Center. Dr. Ness has no financial relationships to disclose.
FROM GASTROENTEROLOGY
Surveillance endoscopy in Barrett’s may perform better than expected
For patients with Barrett’s esophagus, surveillance endoscopy misses high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) less often than previously reported, according to a retrospective analysis of more than 1,000 patients.
The neoplasia detection rate, defined as the proportion of patients with HGD or EAC found on initial surveillance endoscopy, was also lower than rates observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.
This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.
“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”
On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.
To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists, while the remainder were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.
Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Out of 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, which suggests that 13% of diagnoses were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
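As a quick check of the arithmetic behind that 13% figure, the short Python sketch below combines the counts reported in this article; the assumption that the 33 HGD and 20 EAC cases represent distinct patients is ours, made only for illustration.

```python
# Back-of-the-envelope check of the missed-diagnosis rate reported above, using
# counts from this article. Assumes the 33 HGD and 20 EAC cases on initial
# endoscopy were distinct patients (an illustrative assumption, not stated in the study).

initial_cases = 33 + 20   # HGD + EAC detected on initial surveillance endoscopy
repeat_cases = 8          # HGD/EAC detected on repeat endoscopy within 1 year

missed_rate = repeat_cases / (initial_cases + repeat_cases)
print(f"Estimated missed-diagnosis rate: {missed_rate:.0%}")  # about 13%, matching the reported figure
```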
Technology challenged by technique
The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high definition monitors and high resolution endoscopes in subsequent years.
“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.
The importance of technique may be further supported by another finding: Gastroenterologists had almost fourfold greater odds of detecting neoplasia than nongastroenterologists did (odds ratio, 3.6; P = .0154).
“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.
“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”
The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.
“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.
The study was funded by the National Institute of Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.
The current study by Dr. Dhaliwal and colleagues evaluates the neoplasia detection rate (NDR) for high-grade dysplasia (HGD) or esophageal adenocarcinoma (EAC) during surveillance endoscopy, which is a proposed novel quality metric for Barrett’s esophagus (BE). Within a population cohort, the investigators found the NDR was 4.9%, and this did not increase significantly during the study period from 1991 to 2019. Gastroenterologists were more likely to report visible abnormalities during endoscopy, and this was a significant predictor of neoplasia detection in a multivariable model. However, the overall rate of missed HGD or EAC was 13%, and this was not associated with procedural specialty. Interestingly, even with only 57% adherence to the Seattle protocol in this study, there was no association with missed lesions.
Despite advances in endoscopic imaging and measures establishing quality for biopsy technique, there remains substantial room for improvement in the endoscopic management of patients with BE. While unable to evaluate all factors associated with neoplasia detection, the authors have provided an important real-world benchmark for NDR. Further study is needed to establish the connection between NDR and missed dysplasia, as well as its impact on outcomes such as EAC staging and mortality. Critically, understanding the role of specialized training and other factors such as inspection time to improve NDR is needed.
David A. Leiman, MD, MSHP, is the chair of the AGA Quality Committee. He is an assistant professor of medicine at Duke University, Durham, N.C., where he serves as director of esophageal research and quality. He has no conflicts.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Lasting norovirus immunity may depend on T cells
Protection against norovirus gastroenteritis is supported in part by norovirus-specific CD8+ T cells that reside in peripheral, intestinal, and lymphoid tissues, according to investigators.
These findings, and the molecular tools used to discover them, could guide development of a norovirus vaccine and novel cellular therapies, according to lead author Ajinkya Pattekar, MD, of the University of Pennsylvania, Philadelphia, and colleagues.
“Currently, there are no approved pharmacologic therapies against norovirus, and despite several promising clinical trials, an effective vaccine is not available,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology. This gap may stem from an incomplete understanding of norovirus immunity, according to Dr. Pattekar and colleagues.
They noted that most previous research has focused on humoral immunity, which appears to vary between individuals: some people exhibit a strong humoral response, while others mount only partial protection. The investigators also noted that, depending on which studies were examined, this type of defense could last years or fade within weeks to months, and that “immune mechanisms other than antibodies may be important for protection against noroviruses.”
Specifically, cellular immunity may be at work. A 2020 study involving volunteers showed that T cells were cross-reactive to a type of norovirus the participants had never been exposed to.
“These findings suggest that T cells may target conserved epitopes and could offer cross-protection against a broad range of noroviruses,” Dr. Pattekar and colleagues wrote.
To test this hypothesis, they first collected peripheral blood mononuclear cells (PBMCs) from three healthy volunteers with unknown norovirus exposure history. Serum samples were then screened for functional norovirus antibodies based on their capacity to block binding between virus-like particles (VLPs) and histo–blood group antigens (HBGAs). This revealed disparate profiles of blocking antibodies against various norovirus strains. While donor 1 and donor 2 had antibodies against multiple strains, donor 3 lacked norovirus antibodies. Further testing showed that this latter individual was a nonsecretor with limited exposure history.
Next, the investigators tested donor PBMCs for norovirus-specific T-cell responses with use of overlapping libraries of peptides for each of the three norovirus open reading frames (ORF1, ORF2, and ORF3). T-cell responses, predominantly involving CD8+ T cells, were observed in all donors. While donor 1 had the greatest response to ORF1, donors 2 and 3 had responses that focused on ORF2.
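To make the idea of an overlapping peptide library concrete, the brief Python sketch below shows one common way such libraries are constructed for T-cell epitope mapping; the 15-mer length, 10-residue overlap, and toy sequence are illustrative assumptions, not parameters reported by the study.

```python
# Illustrative sketch of building an overlapping peptide library from a protein
# sequence, as is commonly done for T-cell epitope mapping. The 15-mer length and
# 10-residue overlap (5-residue step) are assumed example parameters, not values
# reported by Dr. Pattekar and colleagues.

def overlapping_peptides(sequence: str, length: int = 15, overlap: int = 10) -> list[str]:
    step = length - overlap
    peptides = [sequence[i:i + length] for i in range(0, len(sequence) - length + 1, step)]
    # Make sure the C-terminal end of the protein is covered by a final peptide.
    if peptides and not sequence.endswith(peptides[-1]):
        peptides.append(sequence[-length:])
    return peptides

# Toy example with a made-up 40-residue sequence (not a real norovirus ORF).
toy_orf = "MKMASNDAAPSTDGAAGLVPESNNEVMALEPVAGAALAAP"
print(overlapping_peptides(toy_orf))
```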
“Thus, norovirus-specific T cells targeting ORF1 and ORF2 epitopes are present in peripheral blood from healthy donors regardless of secretor status,” the investigators wrote.
To better characterize T-cell epitopes, the investigators subdivided the overlapping peptide libraries into groups of shorter peptides, then tested donor PBMCs against these smaller component pools. This revealed eight HLA class I-restricted epitopes derived from a genogroup II.4 pandemic norovirus strain; this group of variants has been responsible for all six norovirus pandemics since 1996.
Closer examination of the epitopes showed that they were “broadly conserved beyond GII.4.” Only one epitope exhibited variation in the C-terminal aromatic anchor, and it was nondominant. The investigators therefore identified seven immunodominant CD8+ epitopes, which they considered “valuable targets for vaccine and cell-based therapies.
“These data further confirm that epitope-specific CD8+ T cells are a universal feature of the overall norovirus immune response and could be an attractive target for future vaccines,” the investigators wrote.
Additional testing involving samples of spleen, mesenteric lymph nodes, and duodenum from deceased individuals showed presence of norovirus-specific CD8+ T cells, with particular abundance in intestinal tissue, and distinct phenotypes and functional properties in different tissue types.
“Future studies using tetramers and intestinal samples should build on these observations and fully define the location and microenvironment of norovirus-specific T cells,” the investigators wrote. “If carried out in the context of a vaccine trial, such studies could be highly valuable in elucidating tissue-resident memory correlates of norovirus immunity.”
The study was funded by the National Institutes of Health, the Wellcome Trust, and Deutsche Forschungsgemeinschaft. The investigators reported no conflicts of interest.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Pediatric NAFLD almost always stems from excess body weight, not other etiologies
Nonalcoholic fatty liver disease (NAFLD) in children is almost always caused by excess body weight, not other etiologies, based on a retrospective analysis of 900 patients.
Just 2% of children with overweight or obesity and suspected NAFLD had other causes of liver disease, and none tested positive for autoimmune hepatitis (AIH), reported lead author Toshifumi Yodoshi, MD, PhD, of Cincinnati Children’s Hospital Medical Center, and colleagues.
“Currently, recommended testing of patients with suspected NAFLD includes ruling out the following conditions: AIH, Wilson disease, hemochromatosis, alpha-1 antitrypsin [A1AT] deficiency, viral hepatitis, celiac disease, and thyroid dysfunction,” the investigators wrote in Pediatrics.
Yet evidence supporting this particular battery of tests is scant; just one previous pediatric study has estimated the prevalence of other liver diseases among children with suspected NAFLD. The study showed that the second-most common etiology, after NAFLD, was AIH, at a rate of 4%.
But “the generalizability of these findings is uncertain,” noted Dr. Yodoshi and colleagues, as the study was conducted at one tertiary center in the western United States, among a population that was predominantly Hispanic.
This uncertainty spurred the present study, which was conducted at two pediatric centers: Cincinnati Children’s Hospital Medical Center (2009-2017) and Yale New Haven (Conn.) Children’s Hospital (2012-2017).
The final analysis involved 900 patients aged 18 years or younger with suspected NAFLD based on hepatic steatosis detected via imaging and/or elevated serum aminotransferases. Demographically, a slight majority of the patients were boys (63%), and approximately one-quarter (26%) were Hispanic. Median BMI z score was 2.45, with three out of four patients (76%) exhibiting severe obesity. Out of 900 patients, 358 (40%) underwent liver biopsy, among whom 46% had confirmed nonalcoholic steatohepatitis.
All patients underwent testing to exclude the aforementioned conditions using various diagnostics, revealing that just 2% of the population had etiologies other than NAFLD. Specifically, 11 children had thyroid dysfunction (1.2%), 3 had celiac disease (0.4%), 3 had A1AT deficiency (0.4%), 1 had hemophagocytic lymphohistiocytosis, and 1 had Hodgkin’s lymphoma. None of the children had Wilson disease, hepatitis B or C, or AIH.
Dr. Yodoshi and colleagues highlighted the latter finding, noting that 13% of the patients had autoantibodies for AIH, but “none met composite criteria.” This contrasts with the previous study from 2013, which found an AIH rate of 4%.
“Nonetheless,” the investigators went on, “NAFLD remains a diagnosis of exclusion, and key conditions that require specific treatments must be ruled out in the workup of patients with suspected NAFLD. In the future, the cost-effectiveness of this approach will need to be investigated.”
Interpreting the findings, Francis E. Rushton, MD, of Beaufort (S.C.) Memorial Hospital emphasized the implications for preventive and interventional health care.
“This study showing an absence of etiologies other than obesity in overweight children with NAFLD provides further impetus for pediatricians to work on both preventive and treatment regimens for weight issues,” Dr. Rushton said. “Linking community-based initiatives focused on adequate nutritional support with pediatric clinical support services is critical in solving issues related to overweight in children. Tracking BMI over time and developing healthy habit goals for patients are key parts of clinical interventions.”
The study was funded by the National Institutes of Health. The investigators reported no conflicts of interest.
FROM PEDIATRICS
Maternal caffeine consumption, even small amounts, may reduce neonatal size
For pregnant women, just half a cup of coffee a day may reduce neonatal birth size and body weight, according to a prospective study involving more than 2,500 women.
That’s only 50 mg of caffeine a day, which falls below the upper threshold of 200 mg set by the American College of Obstetricians and Gynecologists, lead author Jessica Gleason, PhD, MPH, of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Bethesda, Md., and colleagues reported.
“Systematic reviews and meta-analyses have reported that maternal caffeine consumption, even in doses lower than 200 mg, is associated with a higher risk for low birth weight, small for gestational age (SGA), and fetal growth restriction, suggesting there may be no safe amount of caffeine during pregnancy,” the investigators wrote in JAMA Network Open.
Findings to date have been inconsistent, with a 2014 meta-analysis reporting contrary or null results in four out of nine studies.
Dr. Gleason and colleagues suggested that such discrepancies may stem from uncontrolled confounding in some of the studies, such as smoking, as well as from the limitations of self-reported intake, which does not account for variations in caffeine content between beverages or for differences in caffeine metabolism between individuals.
“To our knowledge, no studies have examined the association between caffeine intake and neonatal anthropometric measures beyond weight, length, and head circumference, and few have analyzed plasma concentrations of caffeine and its metabolites or genetic variations in the rate of metabolism associated with neonatal size,” the investigators wrote.
Dr. Gleason and colleagues set out to address this knowledge gap with a prospective cohort study including 2,055 nonsmoking women at low risk for birth defects who presented at 12 centers between 2009 and 2013. Mean participant age was 28.3 years, and mean body mass index was 23.6. Races and ethnicities were represented almost evenly across four groups: Hispanic (28.2%), White (27.4%), Black (25.2%), and Asian/Pacific Islander (19.2%). Rate of caffeine metabolism was defined by the single-nucleotide variant rs762551 (CYP1A2*1F), according to which slightly more women had slow metabolism (52.7%) than fast metabolism (47.3%).
Women were enrolled at 8-13 weeks’ gestational age, at which time they underwent interviews and blood draws, allowing for measurement of caffeine and paraxanthine plasma levels, as well as self-reported caffeine consumption during the preceding week.
Over the course of six visits, fetal growth was observed via ultrasound. Medical records were used to determine birth weights and neonatal anthropometric measures, including fat and skin fold mass, body length, and circumferences of the thigh, arm, abdomen, and head.
Neonatal measurements were compared with plasma levels of caffeine and paraxanthine, both continuously and as quartiles (Q1, ≤ 28.3 ng/mL; Q2, 28.4-157.1 ng/mL; Q3, 157.2-658.8 ng/mL; Q4, > 658.8 ng/mL). Comparisons were also made with self-reported caffeine intake.
Women who reported drinking 1-50 mg of caffeine per day had neonates with smaller subscapular skin folds (beta = –0.14 mm; 95% confidence interval, –0.27 to –0.01 mm), while those who reported more than 50 mg per day had newborns with lower birth weight (beta = –66 g; 95% CI, –121 to –10 g), smaller mid-upper thigh circumference (beta = –0.32 cm; 95% CI, –0.55 to –0.09 cm), smaller anterior thigh skin fold (beta = –0.24 mm; 95% CI, –0.47 to –0.01 mm), and smaller mid-upper arm circumference (beta = –0.17 cm; 95% CI, –0.31 to –0.02 cm).
Caffeine plasma concentrations supported these findings.
Compared with women who had caffeine plasma concentrations in the lowest quartile, those in the highest quartile gave birth to neonates with shorter length (beta = –0.44 cm; P = .04 for trend) and lower body weight (beta = –84.3 g; P = .04 for trend), as well as smaller mid-upper arm circumference (beta = –0.25 cm; P = .02 for trend), mid-upper thigh circumference (beta = –0.29 cm; P = .07 for trend), and head circumference (beta = –0.28 cm; P < .001 for trend). A comparison of lower and upper paraxanthine quartiles revealed similar trends, as did analyses of continuous measures.
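For readers who want to see the general shape of this kind of analysis, the sketch below bins plasma caffeine at the study’s reported quartile cut points and runs a simple linear trend test against a neonatal measure. The synthetic data, variable names, and model are illustrative assumptions only, not the authors’ code or results.

```python
# Illustrative sketch only -- not the study's analysis code.
# Bins plasma caffeine at the reported quartile cut points and fits a simple
# linear trend of a neonatal measure (here, synthetic birth weight) on quartile.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2055
caffeine = rng.lognormal(mean=5.0, sigma=1.5, size=n)          # plasma caffeine, ng/mL (synthetic)
birth_weight = 3300 - 0.03 * caffeine + rng.normal(0, 400, n)  # grams (synthetic)
df = pd.DataFrame({"caffeine": caffeine, "birth_weight": birth_weight})

# Quartile cut points reported in the study (ng/mL)
bins = [-np.inf, 28.3, 157.1, 658.8, np.inf]
df["quartile"] = pd.cut(df["caffeine"], bins=bins, labels=[1, 2, 3, 4]).astype(int)

# Treating quartile rank as a continuous term gives a simple test for trend;
# the coefficient is the average change in birth weight per quartile step.
trend = smf.ols("birth_weight ~ quartile", data=df).fit()
print(trend.params["quartile"], trend.pvalues["quartile"])
```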
“Our results suggest that caffeine consumption during pregnancy, even at levels much lower than the recommended 200 mg per day of caffeine may be associated with decreased fetal growth,” the investigators concluded.
Sarah W. Prager, MD, of the University of Washington, Seattle, suggested that the findings “do not demonstrate that caffeine has a clinically meaningful negative clinical impact on newborn size and weight.”
She noted that there was no difference in the rate of SGA between plasma caffeine quartiles, and that most patients were thin, which may not accurately represent the U.S. population.
“Based on these new data, my take home message to patients would be that increasing amounts of caffeine can have a small but real impact on the size of their baby at birth, though it is unlikely to result in a diagnosis of SGA,” she said. “Pregnant patients may want to limit caffeine intake even more than the ACOG recommendation of 200 mg per day.”
According to Robert M. Silver, MD, of the University of Utah Health Sciences Center, Salt Lake City, “data from this study are of high quality, owing to the prospective cohort design, large numbers, assessment of biomarkers, and sophisticated analyses.”
Still, he urged a cautious interpretation from a clinical perspective.
“It is important to not overreact to these data,” he said. “The decrease in fetal growth associated with caffeine is small and may prove to be clinically meaningless. Accordingly, clinical recommendations regarding caffeine intake during pregnancy should not be modified solely based on this study.”
Dr. Silver suggested that the findings deserve additional investigation.
“These observations warrant further research about the effects of caffeine exposure during pregnancy,” he said. “Ideally, studies should assess the effect of caffeine exposure on fetal growth in various pregnancy epochs as well as on neonatal and childhood growth.”
The study was funded by the Intramural Research Program of the NICHD. Dr. Gerlanc is an employee of The Prospective Group, which was contracted to provide statistical support.
FROM JAMA NETWORK OPEN
Preterm infant supine sleep positioning becoming more common, but racial/ethnic disparities remain
Although supine sleep positioning of preterm infants is becoming more common, racial disparities remain, according to a retrospective analysis involving more than 66,000 mothers.
Non-Hispanic Black preterm infants were 39%-56% less likely to sleep on their backs than were non-Hispanic White preterm infants, reported lead author Sunah S. Hwang, MD, MPH, of the University of Colorado, Aurora, and colleagues.
According to the investigators, these findings may explain, in part, why the risk of sudden unexpected infant death (SUID) is more than twofold higher among non-Hispanic Black preterm infants than non-Hispanic White preterm infants.
“During the first year of life, one of the most effective and modifiable parental behaviors that may reduce the risk for SUID is adhering to safe infant sleep practices, including supine sleep positioning or back-sleeping,” wrote Dr. Hwang and colleagues. The report is in the Journal of Pediatrics. “For the healthy-term population, research on the racial/ethnic disparity in adherence to safe sleep practices is robust, but for preterm infants who are at much higher risk for SUID, less is known.”
To address this knowledge gap, the investigators conducted a retrospective study using data from the Pregnancy Risk Assessment Monitoring System (PRAMS), a population-based perinatal surveillance system. The final dataset involved 66,131 mothers who gave birth to preterm infants in 16 states between 2000 and 2015. The sample size was weighted to 1,020,986 mothers.
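Because PRAMS is a weighted surveillance system, each mother’s response is scaled so that the 66,131 sampled mothers stand in for roughly 1.02 million births. The sketch below shows the weighted-prevalence idea on toy data; the column names and values are assumptions, not PRAMS variables.

```python
# Illustrative sketch of survey-weighted prevalence -- toy data, not PRAMS records.
import pandas as pd

records = pd.DataFrame({
    "supine": [1, 0, 1, 1, 0],                 # 1 = consistent supine sleep positioning
    "weight": [12.0, 30.5, 8.2, 19.7, 25.1],   # analysis weight (births represented)
})

# Each response counts in proportion to the number of births the mother represents.
weighted_prevalence = (records["supine"] * records["weight"]).sum() / records["weight"].sum()
print(f"Weighted supine prevalence: {weighted_prevalence:.1%}")
```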
The investigators evaluated annual marginal prevalence of supine sleep positioning among two cohorts: early preterm infants (gestational age less than 34 weeks) and late preterm infants (gestational age 34-36 weeks). The primary outcome was the rate of supine sleep positioning, a practice that must have been followed consistently, to the exclusion of other positions (i.e., prone or side). Mothers were grouped by race/ethnicity into four categories: non-Hispanic Black, non-Hispanic White, Hispanic, and other. Several other maternal and infant characteristics were recorded, including marital status, maternal age, education, insurance prior to birth, history of previous live birth, method of delivery, birth weight, and sex.
From 2000 to 2015, the overall adjusted odds of supine sleep positioning increased by 8.5% in the early preterm group and 5.2% in the late preterm group. This intergroup difference may be due to disparate levels of in-hospital education, the investigators suggested.
“Perhaps the longer NICU hospitalization for early preterm infants compared with late preterm infants affords greater opportunities for parental education and engagement about safe sleep practices,” they wrote.
Among early preterm infants, the odds increased by 7.3%, 7.7%, and 10.0% for non-Hispanic Black, Hispanic, and non-Hispanic White mothers, respectively. Among late preterm infants, the respective increases were 5.9%, 4.8%, and 5.8%.
Despite these improvements, racial disparities were still observed. Non-Hispanic Black mothers reported lower rates of supine sleep positioning for both early preterm infants (odds ratio [OR], 0.61; P < .0001) and late preterm infants (OR, 0.44; P < .0001) compared with non-Hispanic White mothers.
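Assuming the “39%-56% less likely” range cited earlier is read directly from these odds ratios, the conversion is simply one minus each OR:

```latex
% Percent reduction implied by each odds ratio
% (assumption: the 39%-56% range is taken as 1 - OR)
\[
1 - 0.61 = 0.39 \;(39\%), \qquad 1 - 0.44 = 0.56 \;(56\%).
\]
```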
These disparities seem “to be in line with racial/ethnic disparity trends in infant mortality and in SUID rates that have persisted for decades among infants,” the investigators wrote.
To a lesser degree, and without statistical significance, Hispanic mothers reported lower odds of supine sleep positioning than non-Hispanic White mothers for both early preterm infants (OR, 0.80; P = .1670) and late preterm infants (OR, 0.81; P = .1054).
According to Dr. Hwang and colleagues, more specific demographic data are needed to accurately describe supine sleep positioning rates among Hispanic mothers, partly because of the heterogeneity of this cohort.
“A large body of literature has shown significant variability by immigrant status and country of origin in several infant health outcomes among the Hispanic population,” the investigators wrote. “This study was unable to stratify the Hispanic cohort by these characteristics and thus the distribution of supine sleep positioning prevalence across different Hispanic subgroups could not be demonstrated in this study.”
The investigators also suggested that interventional studies are needed.
“Additional efforts to understand the barriers and facilitators to SSP [supine sleep positioning] adherence among all preterm infant caregivers, particularly non-Hispanic Black and Hispanic parents, are needed so that novel interventions can then be developed,” they wrote.
According to Denice Cora-Bramble, MD, MBA, chief diversity officer at Children’s National Hospital and professor of pediatrics at George Washington University, Washington, the observed improvements in supine sleep positioning may predict lower rates of infant mortality, but more work in the area is needed.
“In spite of improvement in infants’ supine sleep positioning during the study period, racial/ethnic disparities persisted among non-Hispanic Blacks and Hispanics,” Dr. Cora-Bramble said. “That there was improvement among the populations included in the study is significant because of the associated and expected decrease in infant mortality. However, the study results need to be evaluated within the context of [the study’s] limitations, such as the inclusion of only sixteen states in the data analysis. More research is needed to understand and effectively address the disparities highlighted in the study.”
The investigators and Dr. Cora-Bramble reported no conflicts of interest.
FROM JOURNAL OF PEDIATRICS
Time is of the essence: DST up for debate again
Seasonal time change is now up for consideration in the U.S. Congress, prompting sleep medicine specialists to weigh in on the health impact of a major policy change.
As lawmakers in Washington propose an end to seasonal time changes by permanently establishing daylight saving time (DST), the American Academy of Sleep Medicine (AASM) is pushing for a Congressional hearing so scientists can present evidence in favor of converse legislation – to make standard time the new norm.
According to the AASM, both seasonal time changes disrupt sleep and circadian rhythms; however, the switch from standard time to DST incurs more risk.
“Current evidence best supports the adoption of year-round standard time, which aligns best with human circadian biology and provides distinct benefits for public health and safety,” the AASM noted in a 2020 position statement on DST.
The statement cites a number of studies that have reported associations between the switch to DST and acute, negative health outcomes, including higher rates of hospital admission, cardiovascular morbidity, atrial fibrillation, and stroke. The time shift has been associated with a spectrum of cellular, metabolic, and circadian derangements, ranging from increased production of inflammatory markers to higher blood pressure and loss of sleep. These biological effects may have far-reaching consequences, including increased rates of fatal motor vehicle accidents in the days following the time change, and even increased volatility in the stock market, which may stem from cognitive deficits.
U.S. Senator Marco Rubio (R-Fla.) and others in the U.S. Congress have reintroduced the 2019 Sunshine Protection Act, legislation that would make DST permanent across the country. According to a statement on Sen. Rubio’s website, “The bill reflects the Florida legislature’s 2018 enactment of year-round DST; however, for Florida’s change to apply, a change in the federal statute is required. Fifteen other states – Arkansas, Alabama, California, Delaware, Georgia, Idaho, Louisiana, Maine, Ohio, Oregon, South Carolina, Tennessee, Utah, Washington, and Wyoming – have passed similar laws, resolutions, or voter initiatives, and dozens more are looking. The legislation, if enacted, would apply to those states [that] currently participate in DST, which most states observe for eight months out of the year.”
A stitch in time
“The sudden change in clock time disrupts sleep/wake patterns, decreasing total sleep time and sleep quality, leading to decrements in daytime cognition,” said Kannan Ramar, MBBS, MD, president of the AASM and a sleep medicine specialist at Mayo Clinic, Rochester, Minn.
Emphasizing this point, Dr. Ramar noted a recent study that reported an 18% increase in “patient safety-related incidents associated with human error” among health care workers within a week of the spring time change.
“Irregular bedtimes and wake times disrupt the timing of our circadian rhythms, which can lead to symptoms of insomnia or long-term, excessive daytime sleepiness. Lack of sleep can lead to numerous adverse effects on our minds, including decreased cognitive function, trouble concentrating, and general moodiness,” Dr. Ramar said.
He noted that these impacts may be more significant among certain individuals.
“The daylight saving time changes can be especially problematic for any populations that already experience chronic insufficient sleep or other sleep difficulties,” Dr. Ramar said. “Populations at greatest risk include teenagers, who tend to experience chronic sleep restriction during the school week, and night shift workers, who often struggle to sleep well during daytime hours.”
While fewer studies have evaluated the long-term effects of seasonal time changes, the AASM position statement cited evidence that “the body clock does not adjust to daylight saving time after several months,” possibly because “daylight saving time is less well-aligned with intrinsic human circadian physiology, and it disrupts the natural seasonal adjustment of the human clock due to the effect of late-evening light on the circadian rhythm.”
According to the AASM, permanent DST, as proposed by Sen. Rubio and colleagues, could “result in permanent phase delay, a condition that can also lead to a perpetual discrepancy between the innate biological clock and the extrinsic environmental clock, as well as chronic sleep loss due to early morning social demands that truncate the opportunity to sleep.” This mismatch between sleep/wake cycles and social demands, known as “social jet lag,” has been associated with chronic health risks, including metabolic syndrome, obesity, depression, and cardiovascular disease.
Cardiac impacts of seasonal time change
Muhammad Adeel Rishi, MD, a sleep specialist at Mayo Clinic, Eau Claire, Wis., and lead author of the AASM position statement, highlighted cardiovascular risks in a written statement for this article, noting increased rates of heart attack following the spring time change, and a higher risk of atrial fibrillation.
“Mayo Clinic has not taken a position on this issue,” Dr. Rishi noted. Still, he advocated for permanent standard time as the author of the AASM position statement and vice chair of the AASM public safety committee.
Jay Chudow, MD, and Andrew K. Krumerman, MD, of Montefiore Medical Center, New York, lead author and principal author, respectively, of a recent study that reported increased rates of atrial fibrillation admissions after DST transitions, had the same stance.
“We support elimination of seasonal time changes from a health perspective,” they wrote in a joint comment. “There is mounting evidence of a negative health impact with these seasonal time changes related to effects on sleep and circadian rhythm. Our work found the spring change was associated with more admissions for atrial fibrillation. This added to prior evidence of increased cardiovascular events related to these time changes. If physicians counsel patients on reducing risk factors for disease, shouldn’t we do the same as a society?”
Pros and cons
Not all sleep experts are convinced. Mary Jo Farmer, MD, PhD, FCCP, a sleep specialist and director of pulmonary hypertension services at Baystate Medical Center, and assistant professor of medicine at the University of Massachusetts, Springfield, considers perspectives from both sides of the issue.
“Daylight saving time promotes active lifestyles as people engage in more outdoor activities after work and school, [and] daylight saving time produces economic and safety benefits to society as retail revenues are higher and crimes are lower,” Dr. Farmer said. “Alternatively, moving the clocks forward is a cost burden to the U.S. economy when health issues, decreased productivity, and workplace injuries are considered.”
If one time system is permanently established, Dr. Farmer anticipates divided opinions from patients with sleep issues, regardless of which system is chosen.
“I can tell you, I have a cohort of sleep patients who prefer more evening light and look forward to the spring time change to daylight saving time,” she said. “However, they would not want the sun coming up at 9:00 a.m. in the winter months if we stayed on daylight saving time year-round. Similarly, patients would not want the sun coming up at 4:00 a.m. on the longest day of the year if we stayed on standard time all year round.”
Dr. Farmer called for more research before a decision is made.
“I suggest we need more information about the dangers of staying on daylight saving or standard time year-round because perhaps the current strategy of keeping morning light consistent is not so bad,” she said.
Time for a Congressional hearing?
According to Dr. Ramar, the time is now for a Congressional hearing, as lawmakers and the public need to be adequately informed when considering new legislation.
“There are public misconceptions about daylight saving time and standard time,” Dr. Ramar said. “People often like the idea of daylight saving time because they think it provides more light, and they dislike the concept of standard time because they think it provides more darkness. The reality is that neither time system provides more light or darkness than the other; it is only the timing that changes.”
Until new legislation is introduced, Dr. Ramar offered some practical advice for navigating seasonal time shifts.
“Beginning 2-3 days before the time change, it can be helpful to gradually adjust sleep and wake times, as well as other daily routines such as meal times,” he said. “After the time change, going outside for some morning light can help adjust the timing of your internal body clock.”
The investigators reported no conflicts of interest.
Seasonal time change is now up for consideration in the U.S. Congress, prompting sleep medicine specialists to weigh in on the health impact of a major policy change.
As lawmakers in Washington propose an end to seasonal time changes by permanently establishing daylight saving time (DST), the American Academy of Sleep Medicine (AASM) is pushing for a Congressional hearing so scientists can present evidence in favor of converse legislation – to make standard time the new norm.
According to the AASM, ; however, the switch from standard time to DST incurs more risk.
“Current evidence best supports the adoption of year-round standard time, which aligns best with human circadian biology and provides distinct benefits for public health and safety,” the AASM noted in a 2020 position statement on DST.
The statement cites a number of studies that have reported associations between the switch to DST and acute, negative health outcomes, including higher rates of hospital admission, cardiovascular morbidity, atrial fibrillation, and stroke. The time shift has been associated with a spectrum of cellular, metabolic, and circadian derangements, from increased production of inflammatory markers, to higher blood pressure, and loss of sleep. These biological effects may have far-reaching consequences, including increased rates of fatal motor accidents in the days following the time change, and even increased volatility in the stock market, which may stem from cognitive deficits.
U.S. Senator Marco Rubio (R-Fla.) and others in the U.S. Congress have reintroduced the 2019 Sunshine Protection Act, legislation that would make DST permanent across the country. According to a statement on Sen. Rubio’s website, “The bill reflects the Florida legislature’s 2018 enactment of year-round DST; however, for Florida’s change to apply, a change in the federal statute is required. Fifteen other states – Arkansas, Alabama, California, Delaware, Georgia, Idaho, Louisiana, Maine, Ohio, Oregon, South Carolina, Tennessee, Utah, Washington, and Wyoming – have passed similar laws, resolutions, or voter initiatives, and dozens more are looking. The legislation, if enacted, would apply to those states [that] currently participate in DST, which most states observe for eight months out of the year.”
A stitch in time
“The sudden change in clock time disrupts sleep/wake patterns, decreasing total sleep time and sleep quality, leading to decrements in daytime cognition,” said Kannan Ramar, MBBS, MD, president of the AASM and a sleep medicine specialist at Mayo Clinic, Rochester, Minn.
Emphasizing this point, Dr. Ramar noted a recent study that reported an 18% increase in “patient safety-related incidents associated with human error” among health care workers within a week of the spring time change.
“Irregular bedtimes and wake times disrupt the timing of our circadian rhythms, which can lead to symptoms of insomnia or long-term, excessive daytime sleepiness. Lack of sleep can lead to numerous adverse effects on our minds, including decreased cognitive function, trouble concentrating, and general moodiness,” Dr. Ramar said.
He noted that these impacts may be more significant among certain individuals.
“The daylight saving time changes can be especially problematic for any populations that already experience chronic insufficient sleep or other sleep difficulties,” Dr. Ramar said. “Populations at greatest risk include teenagers, who tend to experience chronic sleep restriction during the school week, and night shift workers, who often struggle to sleep well during daytime hours.”
While fewer studies have evaluated the long-term effects of seasonal time changes, the AASM position statement cited evidence that “the body clock does not adjust to daylight saving time after several months,” possibly because “daylight saving time is less well-aligned with intrinsic human circadian physiology, and it disrupts the natural seasonal adjustment of the human clock due to the effect of late-evening light on the circadian rhythm.”
According to the AASM, permanent DST, as proposed by Sen. Rubio and colleagues, could “result in permanent phase delay, a condition that can also lead to a perpetual discrepancy between the innate biological clock and the extrinsic environmental clock, as well as chronic sleep loss due to early morning social demands that truncate the opportunity to sleep.” This mismatch between sleep/wake cycles and social demands, known as “social jet lag,” has been associated with chronic health risks, including metabolic syndrome, obesity, depression, and cardiovascular disease.
Cardiac impacts of seasonal time change
Muhammad Adeel Rishi, MD, a sleep specialist at Mayo Clinic, Eau Claire, Wis., and lead author of the AASM position statement, highlighted cardiovascular risks in a written statement for this article, noting increased rates of heart attack following the spring time change, and a higher risk of atrial fibrillation.
“Mayo Clinic has not taken a position on this issue,” Dr. Rishi noted. Still, he advocated for permanent standard time as the author of the AASM position statement and vice chair of the AASM public safety committee.
Jay Chudow, MD, and Andrew K. Krumerman, MD, of Montefiore Medical Center, New York, lead author and senior author, respectively, of a recent study that reported increased rates of atrial fibrillation admissions after DST transitions, took the same stance.
“We support elimination of seasonal time changes from a health perspective,” they wrote in a joint comment. “There is mounting evidence of a negative health impact with these seasonal time changes related to effects on sleep and circadian rhythm. Our work found the spring change was associated with more admissions for atrial fibrillation. This added to prior evidence of increased cardiovascular events related to these time changes. If physicians counsel patients on reducing risk factors for disease, shouldn’t we do the same as a society?”
Pros and cons
Not all sleep experts are convinced. Mary Jo Farmer, MD, PhD, FCCP, a sleep specialist and director of pulmonary hypertension services at Baystate Medical Center, and assistant professor of medicine at the University of Massachusetts, Springfield, considers perspectives from both sides of the issue.
“Daylight saving time promotes active lifestyles as people engage in more outdoor activities after work and school, [and] daylight saving time produces economic and safety benefits to society as retail revenues are higher and crimes are lower,” Dr. Farmer said. “Alternatively, moving the clocks forward is a cost burden to the U.S. economy when health issues, decreased productivity, and workplace injuries are considered.”
If one time system is permanently established, Dr. Farmer anticipates divided opinions from patients with sleep issues, regardless of which system is chosen.
“I can tell you, I have a cohort of sleep patients who prefer more evening light and look forward to the spring time change to daylight saving time,” she said. “However, they would not want the sun coming up at 9:00 a.m. in the winter months if we stayed on daylight saving time year-round. Similarly, patients would not want the sun coming up at 4:00 a.m. on the longest day of the year if we stayed on standard time all year round.”
Dr. Farmer called for more research before a decision is made.
“I suggest we need more information about the dangers of staying on daylight saving or standard time year-round because perhaps the current strategy of keeping morning light consistent is not so bad,” she said.
Time for a Congressional hearing?
According to Dr. Ramar, the time is now for a Congressional hearing, as lawmakers and the public need to be adequately informed when considering new legislation.
“There are public misconceptions about daylight saving time and standard time,” Dr. Ramar said. “People often like the idea of daylight saving time because they think it provides more light, and they dislike the concept of standard time because they think it provides more darkness. The reality is that neither time system provides more light or darkness than the other; it is only the timing that changes.”
Until new legislation is introduced, Dr. Ramar offered some practical advice for navigating seasonal time shifts.
“Beginning 2-3 days before the time change, it can be helpful to gradually adjust sleep and wake times, as well as other daily routines such as meal times,” he said. “After the time change, going outside for some morning light can help adjust the timing of your internal body clock.”
The investigators reported no conflicts of interest.