Obstetric units place twice as many wrong-patient orders as medical-surgical units

Clinicians in obstetric units place nearly twice as many wrong-patient orders as their medical-surgical counterparts, based on a retrospective look at more than 1.3 million orders.

These findings suggest that obstetric patients are at particular risk for this type of medical error, and that steps are needed to address obstetric clinical culture, work flow, and electronic medical record interfaces, reported lead author Adina R. Kern-Goldberger, MD, of the department of obstetrics and gynecology at the University of Pennsylvania, Philadelphia, and colleagues.

The root of the issue may lie in the very nature of obstetrics and the homogeneity of its patient population, they wrote in Obstetrics & Gynecology.

“Obstetrics is a unique clinical environment because all patients are admitted with a common diagnosis – pregnancy – and have much more overlap in demographic characteristics than a typical inpatient unit given that they are all females of reproductive age,” the investigators wrote. “The labor and delivery environment also is distinct in the hospital given its dynamic tempo and unpredictable work flow. There also is the added risk of neonates typically being registered in the hospital record under the mother’s name after birth. This generates abundant opportunity for errors in order placement, both between obstetric patients and between postpartum patients and their newborns.”

To determine the relative magnitude of this risk, Dr. Kern-Goldberger and colleagues analyzed EMRs from 45,436 obstetric patients and 12,915 medical-surgical patients at “a large, urban, integrated health system in New York City,” including 1,329,463 order sessions placed between 2016 and 2018.

The primary outcome was near-miss wrong-patient orders, which were identified by the Wrong-Patient Retract-and-Reorder measure.

“The measure uses an electronic query to detect retract-and-reorder events, defined as one or more orders placed for patient A, canceled by the same clinician within 10 minutes, and reordered by the same clinician for patient B within the next 10 minutes,” the investigators wrote.

In obstetric units, 79.5 wrong-patient orders were placed per 100,000 order sessions, which was 98% higher than the rate of 42.3 wrong-patient orders per 100,000 order sessions in medical-surgical units (odds ratio, 1.98; 95% confidence interval, 1.64-2.39), a disparity that was observed across clinician types and times of day.

Advanced practice clinicians in obstetrics placed 47.3 wrong-patient orders per 100,000 order sessions, significantly fewer than attending physicians (127.0 per 100,000) and house staff (119.9 per 100,000).
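
The retract-and-reorder definition translates naturally into a query over order-entry logs. The following is a minimal pandas sketch of that logic, not the study's actual query; the table and column names (clinician_id, patient_id, order_name, order_time, cancel_time) are hypothetical stand-ins for an EMR audit log.

```python
import pandas as pd

def find_retract_and_reorder(orders: pd.DataFrame,
                             window_minutes: int = 10) -> pd.DataFrame:
    """Flag retract-and-reorder events: an order placed for patient A,
    canceled by the same clinician within `window_minutes`, then placed
    again by that clinician for a different patient B within the window.

    Assumed columns: clinician_id, patient_id, order_name,
    order_time, cancel_time (NaT if never canceled).
    """
    window = pd.Timedelta(minutes=window_minutes)

    # Orders canceled by the ordering clinician within the window.
    canceled = orders[
        orders["cancel_time"].notna()
        & (orders["cancel_time"] - orders["order_time"] <= window)
    ]

    # Same clinician, same order, different patient, re-placed within
    # the window after cancellation.
    pairs = canceled.merge(orders, on=["clinician_id", "order_name"],
                           suffixes=("_a", "_b"))
    rar = pairs[
        (pairs["patient_id_a"] != pairs["patient_id_b"])
        & (pairs["order_time_b"] > pairs["cancel_time_a"])
        & (pairs["order_time_b"] - pairs["cancel_time_a"] <= window)
    ]
    return rar[["clinician_id", "order_name", "patient_id_a",
                "patient_id_b", "order_time_a", "cancel_time_a",
                "order_time_b"]]
```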

Wrong-patient orders in obstetrics most often involved medication (73.2 per 100,000), particularly nifedipine, antibiotics, tocolytics, and nonoxytocin uterotonics. The “other” category, including but not limited to lab studies and nursing orders, was associated with 51.0 wrong-patient orders per 100,000 order sessions, while errors in diagnostic imaging orders followed distantly, at a rate of 5.7 per 100,000.

“Although the obstetric clinical environment – particularly labor and delivery – is vibrant and frequently chaotic, it is critical to establish a calm, orderly, and safe culture around order entry,” the investigators wrote. “This, combined with efforts to improve house staff work flow and to optimize EMR interfaces, is likely to help mitigate the threat of wrong order errors to patient care and ultimately improve maternal health and safety.”

According to Catherine D. Cansino, MD, associate clinical professor of obstetrics and gynecology at UC Davis (Calif.) Health, the findings highlight the value of medical informatics while revealing a need to improve EMR interfaces.

“Medical informatics is a growing field and expertise among ob.gyns. is very important,” Dr. Cansino said in an interview. “This study by Kern-Goldberger and colleagues highlights the vulnerability of our EMR systems (and our patients, indirectly) when medical informatics systems are not optimized. The investigators present a study that advocates for greater emphasis on optimizing such systems in obstetrics units, especially in the context of high acuity settings such as obstetrics, compared to medical-surgical units. Appropriately, the study highlights the avoided harm when correcting medical errors for obstetric patients since such errors potentially affect both the delivering patient and the newborn.”

The study was funded by the Agency for Healthcare Research and Quality. One coauthor disclosed funding from the Icahn School of Medicine at Mount Sinai, Georgetown University, the National Institutes of Health – Office of Scientific Review, and the Social Science Research Council. Another reported funding from Roche.

South Asian ancestry associated with twice the risk of heart disease

Individuals of South Asian ancestry face twice the risk of heart disease, compared with individuals of European descent, yet existing risk calculators fail to account for this disparity, according to the results of a new study.

These findings confirm previous reports and practice guidelines that identify South Asian ancestry as a risk enhancer for atherosclerotic cardiovascular disease (ASCVD), suggesting that earlier heart disease screening and prevention is warranted in this patient population, lead author Aniruddh P. Patel, MD, research fellow at the Center for Genomic Medicine, Massachusetts General Hospital, Boston, and colleagues said.

“Previous studies in multiple countries have estimated a 1.7- to 4-fold higher risk of ASCVD among South Asian individuals, compared with other ancestries, but have important potential limitations,” Dr. Patel and colleagues wrote in the paper on their prospective cohort study, published in Circulation.

The INTERHEART case-control study, for example, which assessed risk factors for acute myocardial infarction among more than 15,000 cases from 52 countries, is now 2 decades old, and “may not reflect recent advances in cardiovascular disease prevention,” the investigators wrote.

Most studies in the area have been small and retrospective, they added, and have not adequately assessed emerging risk factors, such as prediabetes, which appear to play a relatively greater role in the development of heart disease among South Asians.

Methods and results

To address this knowledge gap, Dr. Patel and colleagues analyzed data from the UK Biobank prospective cohort study, including 449,349 middle-aged participants of European ancestry and 8,124 similarly aged participants of South Asian descent who did not have heart disease upon enrollment. Respective rates of incident ASCVD (i.e., MI, ischemic stroke, or coronary revascularization) were analyzed in the context of a variety of lifestyle, anthropometric, and clinical factors.

After a median follow-up of 11.1 years, individuals of South Asian descent had an incident ASCVD rate of 6.8%, compared with 4.4% for individuals of European descent, representing twice the relative risk (adjusted hazard ratio, 2.03; 95% CI, 1.86-2.22; P < .001). Even after accounting for all covariates, risk of ASCVD remained 45% higher for South Asian individuals (aHR, 1.45; 95% CI, 1.28-1.65; P < .001). This elevation in risk was not captured by existing risk calculators, including the American College of Cardiology/American Heart Association Pooled Cohort Equations and the QRISK3 equations.
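
Adjusted hazard ratios of this kind typically come from Cox proportional hazards models. As a rough illustration of how such an estimate is produced (not the study's actual model specification), here is a sketch using the Python lifelines package; the file name, column names, and covariate list are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per participant, with follow-up
# time in years, an ASCVD event indicator, an ancestry indicator
# (1 = South Asian, 0 = European), and numeric adjustment covariates.
df = pd.read_csv("ukb_cohort.csv")  # placeholder file name

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "ascvd_event", "south_asian",
        "age", "sex", "sbp", "total_chol", "hdl_chol",
        "diabetes", "smoker"]],
    duration_col="follow_up_years",
    event_col="ascvd_event",
)

# The exponentiated coefficient on the ancestry indicator is the
# adjusted hazard ratio (the paper reports aHR 1.45 after adjustment).
print(cph.summary.loc["south_asian",
                      ["exp(coef)",
                       "exp(coef) lower 95%",
                       "exp(coef) upper 95%"]])
```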

The findings were “largely consistent across a range of age, sex, and clinical subgroups,” and “confirm and extend previous reports that hypertension, diabetes, and central adiposity are the leading associations in this observed disparity,” the investigators wrote.

Two diabetes subtypes are more prevalent in South Asians

Hypertension, diabetes, and central adiposity do not fully explain South Asians’ higher risk for ASCVD, wrote Namratha R. Kandula, MD, of Northwestern University Medical Center, Chicago, and Alka M. Kanaya, MD, of the University of California, San Francisco, in an accompanying editorial published in Circulation.

Some of the undetected risk may stem from unique diabetes disease factors, Dr. Kandula and Dr. Kanaya added.

“Newer data have demonstrated distinct subtypes of type 2 diabetes, with South Asians having a higher prevalence of both a severe insulin resistant with obesity subtype and a less recognized severe insulin deficient subtype,” they wrote. “Importantly, both of these more prevalent diabetes subtypes in South Asians were associated with a higher incidence of coronary artery calcium, a marker of subclinical atherosclerosis and strong predictor of future ASCVD, compared to other diabetes subtypes.”

Diabetes rate is higher for South Asians in the U.S.

Although the present study was conducted in the United Kingdom, the findings likely apply to individuals of South Asian ancestry living in the United States, according to principal author Amit V. Khera, MD, associate director of the precision medicine unit at the Center for Genomic Medicine, Massachusetts General Hospital.

“There are already more than 5 million individuals of South Asian ancestry in the U.S. and this represents one of the fastest-growing ethnic subgroups,” Dr. Khera said in an interview. “As in our study of individuals in the U.K., South Asians in the U.S. suffer from diabetes at much higher rates – 23% versus 12% – and this often occurs even in the absence of obesity.”

Dr. Khera noted that the 2019 ACC/AHA Guideline on the Primary Prevention of Cardiovascular Disease identifies South Asian ancestry as a risk-enhancing factor, calling this a “stopgap measure.” More work is needed, he said, in the research arena and in the clinic.

Zero South Asians included in studies used to develop risk estimator

“I think the first step is to simply acknowledge that the risk estimators we use in clinical practice have important limitations when it comes to diverse patient populations,” Dr. Khera said in an interview. “We saw this in our study, where – despite a more than doubling of risk – the predicted risk based on the equations used in primary care showed no difference. This risk estimator was developed based on legacy cohort studies, in [which] zero South Asians were included. Despite important differences across race/ethnicity, the current state-of-the-art in the U.S. is to use one equation for Black individuals and another for all other ethnicities.”
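
The miscalibration Dr. Khera describes can be checked directly when predicted and observed risks are both on hand: similar average predicted risk across ancestry groups despite very different observed event rates signals a problem. A minimal sketch, assuming a hypothetical data file and column names:

```python
import pandas as pd

# Hypothetical frame: a precomputed 10-year predicted risk (e.g., from
# a Pooled Cohort Equations implementation), an observed 10-year ASCVD
# outcome (0/1), and an ancestry label per participant.
df = pd.read_csv("risk_predictions.csv")  # placeholder file name

calibration = df.groupby("ancestry").agg(
    mean_predicted_risk=("pce_10yr_risk", "mean"),
    observed_event_rate=("ascvd_10yr", "mean"),
    n=("ascvd_10yr", "size"),
)
# A well-calibrated score should track the observed rate within each
# group; flat predicted risk across groups alongside a twofold gap in
# observed rates reproduces the disparity described above.
print(calibration)
```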

Experts suggest steps for reducing heart disease risk

While risk modeling remains suboptimal, Dr. Khera suggested that clinicians can take immediate steps to reduce the risk of heart disease among individuals with South Asian ancestry.

“Despite all of the uncertainty – we still don’t have a complete understanding of why the risk is so high – there are still several things primary care doctors can do for their patients,” Dr. Khera said.

Foremost, he recommended lifestyle and dietary counseling.

“Dietary counseling is particularly effective if put in the context of cultural norms,” Dr. Khera said. “Many South Asians consider fruit juices or white rice to be healthy, when they lead to rapid spikes in blood sugar.”

Dr. Khera also advised earlier heart disease screening, such as coronary calcium scanning “sometime between age 40-50 years,” noting that positive test results may motivate patients to start or adhere to medications, such as cholesterol-lowering therapies. If necessary, clinicians can also refer patients to dedicated South Asian heart centers, which are becoming increasingly common.

According to Cheryl A.M. Anderson, PhD, chair of the AHA’s Council on Epidemiology and Prevention, and professor and dean of the Herbert Wertheim School of Public Health and Human Longevity Science at the University of California, San Diego, the current study suggests that heart disease management strategies for South Asian patients may be lacking.

“We have had a tradition of preventing or trying to treat heart disease in a fashion that doesn’t yet account for the increased risk that might be prevalent in those of South Asian ancestry,” Dr. Anderson said in an interview.

She suggested that improving associated risk-analysis tools could be beneficial, although the tools themselves, in the context of race or ethnicity, may present their own risks.

“We want to be mindful of potential adverse implications from having everything linked to one’s ancestry, which I think this tool doesn’t do,” Dr. Anderson said, referring to the AHA/ACC Pooled Cohort Equations. “But in sort of the bigger picture of things, we always want to expand and refine our toolkit.”

According to Rajesh Dash, MD, PhD, associate professor, cardiologist, and director of the South Asian Translational Heart Initiative (SSATHI) Prevention Clinic and CardioClick Telemedicine Clinic at Stanford (Calif.) HealthCare, the science supports more active risk mitigation strategies for South Asian patients, and the AHA and the ACC “are the two entities that need to lead the way.”

“Certainly the American Heart Association and the American College of Cardiology should be taking a more active leadership role in this,” Dr. Dash said in an interview.

In 2018, the AHA issued a scientific statement about the elevated risk of ASCVD among South Asian individuals, “but it did not rise to the level of specific recommendations, and did not necessarily go as far as to incorporate new screening parameters for that population,” Dr. Dash said. He also noted that the most recent AHA/ACC guideline identifies South Asian ancestry as a risk-enhancing feature, a statement similarly lacking in actionable value.

“That does not definitively lead someone like a primary care physician to a decision to start a statin, or to be more aggressive with a diagnostic workup, like a stress test, for instance, for a patient who they otherwise would not have done one in had they not been South Asian,” Dr. Dash said.

The steps taken by the AHA and the ACC are “a healthy step forward,” he noted, “but not nearly the degree of attention or vigilance that is required, as well as the level of action that is required to change the narrative for the population.”

At the SSATHI Prevention Clinic, Dr. Dash and colleagues aren’t waiting for the narrative to change, and are already taking a more aggressive approach.

The clinic has an average patient age of 41 years, “easily 15 years younger than the average age in most cardiology clinics,” Dr. Dash said, a focus driven by the fact that approximately two-thirds of heart attacks in South Asian individuals occur before age 55. “We have to look earlier.”

The SSATHI Prevention Clinic screens for both traditional and emerging risk factors, and Dr. Dash suggested that primary care doctors should do the same.

“If you have a South Asian patient as a primary care physician, you should be aggressively looking for risk factors, traditional ones to start, like elevated cholesterol, hypertension, diabetes, or – and I would argue strongly – prediabetes or insulin resistance.”

Dr. Dash also recommended looking into family history, and considering screening for inflammatory biomarkers, the latter of which are commonly elevated at an earlier age among South Asian individuals, and may have a relatively greater prognostic impact.

To encourage broader implementation of this kind of approach, Dr. Dash called for more large-scale studies. Ideally, these would be randomized clinical trials, but, realistically, real-world datasets may be the answer.

The study was supported by the National Heart, Lung, and Blood Institute; the Broad Institute at MIT and Harvard; the National Human Genome Research Institute; and others. The investigators disclosed relationships with IBM Research, Sanofi, Amgen, and others. Dr. Dash disclosed relationships with HealthPals and AstraZeneca. Dr. Anderson reported no relevant conflicts of interest.

Mutational signature may reveal underlying link between red meat and CRC

A mechanistic link between red meat consumption and colorectal cancer (CRC) has been identified in the form of an alkylating mutational signature, according to investigators.

This is the first time a colorectal mutational signature has been associated with a component of diet, which demonstrates the value of large-scale molecular epidemiologic studies and suggests potential for early, precision dietary intervention, reported lead author Carino Gurjao, MSc, of the Dana-Farber Cancer Institute and Harvard Medical School, both in Boston, and colleagues.

“Red meat consumption has been consistently linked to the incidence of colorectal cancer,” the investigators wrote in Cancer Discovery. “The suggested mechanism is mutagenesis through alkylating damage induced by N-nitroso-compounds (NOCs), which are metabolic products of blood heme iron or meat nitrites/nitrates. Nevertheless, this mutational damage is yet to be observed directly in patients’ tumors.”

To this end, the investigators turned to three long-term, large-scale, prospective cohort studies: the Nurses’ Health Studies I and II, and the Health Professionals Follow-up Study. These databases include nearly 300,000 individuals with follow-up dating back as far as 1976. The investigators identified 900 cases of primary, untreated CRC with adequate tissue for analysis, then, for each case, performed whole exome sequencing on both tumor tissue and normal colorectal tissue.

This revealed an alkylating mutational signature previously undescribed in CRC that was significantly associated with consumption of red meat prior to diagnosis, but not other dietary or lifestyle factors. The signature occurred most frequently in tumors and normal crypts in the distal colon and rectum.
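
Mutational signatures of this kind are conventionally extracted by factorizing a matrix of mutation counts (tumors by 96 trinucleotide contexts) with non-negative matrix factorization. The sketch below illustrates that standard technique with scikit-learn; it is a generic illustration, not the study's actual pipeline, and the input file is hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical input: rows = tumors, columns = the 96 single-base
# substitution contexts (A[C>A]A ... T[T>G]T), values = mutation counts.
counts = np.loadtxt("sbs96_counts.tsv")  # shape (n_tumors, 96)

# Factorize counts ~ exposures @ signatures with k latent signatures;
# k is usually chosen by fit and stability criteria.
k = 5
model = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
exposures = model.fit_transform(counts)  # (n_tumors, k) per-tumor activity
signatures = model.components_           # (k, 96) signature profiles

# Normalize each signature to a probability distribution over contexts;
# per-tumor exposures can then be tested for association with diet data
# (e.g., prediagnosis red meat intake) in a regression model.
signatures = signatures / signatures.sum(axis=1, keepdims=True)
```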

According to the investigators, the presence of the alkylating signature in normal colorectal crypts “suggests that mutational changes due to such damage may start to occur early in the path of colorectal carcinogenesis.”

Further analysis showed that tumors harboring common KRAS and PIK3CA driver mutations had the highest levels of alkylating damage, with higher levels predicting worse survival.

“These results ... further implicate the role of red meat in CRC initiation and progression,” the investigators concluded.

Early findings, important implications

Cosenior author Kana Wu, MD, PhD, principal research scientist in the department of nutrition at Harvard School of Public Health, Boston, noted that these are early findings, although they may pave the way toward new dietary recommendations and methods of food production.

“While more detailed analysis needs to be conducted, and our results need to be confirmed in other studies, this study is a promising first step to better understand the biological mechanisms underlying the role of red and processed meats in colorectal cancers,” Dr. Wu said in an interview. “It is important to gain more insight into the biological mechanisms so we can improve dietary guidelines for cancer prevention and guide food reformulation efforts to lower cancer risk.”

For now, Dr. Wu predicted that standing dietary recommendations will remain unchanged.

“This study will not alter current diet recommendations to limit intake of red and processed meats,” Dr. Wu said, referring to similar recommendations across several organizations, including the American Heart Association, the World Cancer Research Fund/American Institute for Cancer Research, and the American Cancer Society.

“For example,” Dr. Wu said, “the WCRF/AICR recommends limiting consumption of red and processed meat to ‘no more than moderate amounts [12-18 ounces per week] of red meat, such as beef, pork, and lamb, and [to] eat little, if any, processed meat.’”

Possible biomarker?

According to Patricia Thompson-Carino, PhD, deputy director of the Stony Brook (N.Y.) Cancer Center, the study provides convincing evidence linking red meat consumption with development of CRC.

“Higher frequency of the signature in the distal colon is compelling for its consistency with epidemiologic evidence,” Dr. Thompson-Carino said in an interview. “Combined with the observed worse survival in patients harboring the signature and association with oncogenic KRAS and PIK3CA driver mutations, this study significantly elevates the biological plausibility that red meat is a modifiable source of NOC mutagenicity and carcinogenesis in humans.”

The signature could be used as a biomarker to detect exposure to NOCs, and susceptibility to CRC, she added.

Still, Dr. Thompson-Carino suggested that more work is needed to fully elucidate underlying mechanisms of action, which are needed to accurately shape dietary guidance.

“Key to advancing red meat dietary recommendations will be understanding the relationships between the new mutation signature and the NOCs derived from red meat and their source, whether endogenous [for example, intestinal N-nitrosation] or exogenous [for example, chemical preservation or charring],” she said.

The study was supported by the National Institutes of Health, the Stand Up To Cancer Colorectal Cancer Dream Team Translational Research Grant (coadministered by the American Association for Cancer Research), the Project P Fund, and others. The investigators, Dr. Wu, and Dr. Thompson-Carino reported no conflicts of interest related to this study.

Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.

FROM CANCER DISCOVERY

AGA Clinical Practice Update: Early complications after bariatric/metabolic surgery


The American Gastroenterological Association recently published a clinical practice update concerning endoscopic evaluation and management of early complications after bariatric/metabolic surgery.

The seven best practice advice statements, based on available evidence and expert opinion, range from a general call for high familiarity with available interventions to specific approaches for managing postoperative leaks.

According to lead author Vivek Kumbhari, MD, PhD, director of advanced endoscopy, department of gastroenterology and hepatology, Mayo Clinic, Jacksonville, Fla., and colleagues, the update was written in consideration of increasing rates of bariatric/metabolic surgery.

“Bariatric/metabolic surgery is unmatched with respect to its weight loss and metabolic benefits,” the investigators wrote in Clinical Gastroenterology and Hepatology. “The selection criteria will continue to broaden, likely resulting in increasing numbers of less robust patients undergoing surgery (e.g., children, elderly, and those with significant cardiorespiratory comorbidities).”

Although the 90-day overall complication rate across all patients undergoing bariatric/metabolic surgery is only 4%, Dr. Kumbhari and colleagues noted that this rate is considerably higher, at 20.1%, among patients older than 65 years.

“As utilization escalates, so will the number of patients who suffer early complications,” they wrote.

The first three items of best practice advice describe who should be managing complications after bariatric/metabolic surgery, and how.

Foremost, Dr. Kumbhari and colleagues called for a multidisciplinary approach; they suggested that endoscopists should work closely with related specialists, such as bariatric/metabolic surgeons and interventional radiologists.

“Timely communication between the endoscopist, radiologist, surgeon, nutritionists, and inpatient medical team or primary care physician will result in efficient, effective care with prompt escalation and deescalation,” they wrote. “Daily communication is advised.”

The next two best practice advice statements encourage high familiarity with endoscopic treatments, postsurgical anatomy, interventional radiology, and surgical interventions, including risks and benefits of each approach.

“The endoscopist should ... have expertise in interventional endoscopy techniques, including but not limited to using concomitant fluoroscopy, stent deployment and retrieval, pneumatic balloon dilation, incisional therapies, endoscopic suturing, and managing percutaneous drains,” the investigators wrote. “Having the ability to perform a wide array of therapies will enhance the likelihood that the optimal endoscopic strategy will be employed, as opposed to simply performing a technique with which the endoscopist has experience.”

Following these best practices, Dr. Kumbhari and colleagues advised screening patients with postoperative complications for comorbidities, both medical in nature (such as infection) and psychological.

“Patients often have higher depression and anxiety scores, as well as a lower physical quality of life, and medical teams sometimes neglect the patient’s psychological state,” they wrote. “It is imperative that the multidisciplinary team recognize and acknowledge the patient’s psychological comorbidities and engage expertise to manage them.”

Next, the investigators advised that endoscopic intervention should be considered regardless of time interval since surgery, including the immediate postoperative period.

“Endoscopy is often indicated as the initial therapeutic modality, and it can safely be performed,” Dr. Kumbhari and colleagues wrote. “When endoscopy is performed, it is advised to use carbon dioxide for insufflation. Caution should be used when advancing the endoscope into the small bowel, as it is best to minimize pressure along the fresh staple lines. In cases in which the patient is critically ill or the interventional endoscopist does not have extensive experience with such a scenario, the endoscopy should be performed in the operating room with a surgeon present (preferably the surgeon who performed the operation).”

Dr. Kumbhari and colleagues discussed functional stenosis, which can precipitate and propagate leaks. They noted that “downstream stenosis is frequently seen at the level of the incisura angularis or in the proximal stomach when a sleeve gastrectomy is performed in a patient with a prior laparoscopic adjustable gastric band.”

To address stenosis, the update calls for “aggressive dilation” using a large pneumatic balloon, preferably with fluoroscopy to make sure the distal end of the balloon does not cross the pylorus. The investigators noted that endoscopic suturing may be needed if a tear involving the muscularis propria is encountered.

Lastly, the clinical practice update offers comprehensive guidance for managing staple-line leaks, which “most commonly occur along the staple line of the proximal stomach.”

As leaks are thought to stem from ischemia, “most leaks are not present upon completion of the operation, and they develop over the subsequent weeks, often in the setting of downstream stenosis,” the investigators wrote.

To guide management of staple-line leaks, the investigators presented a treatment algorithm that incorporates defect size, time since surgery, and presence or absence of stenosis.

For example, a defect smaller than 10 mm occurring within 6 weeks of surgery and lacking stenosis may be managed with a percutaneous drain and diversion. In contrast, a defect of similar size, also without stenosis, but occurring later than 6 weeks after the initial procedure, should be managed with endoscopic internal drainage or vacuum therapy.
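To make the branching concrete, below is a minimal sketch in Python of the two worked examples just described. It is illustrative only: the function name manage_staple_line_leak and the returned strategy strings are hypothetical, and the update's actual algorithm covers additional combinations of defect size, timing, and stenosis that are not encoded here.

# Hypothetical sketch of the two example branches described above; the
# update's full algorithm covers more combinations and always defers to
# multidisciplinary clinical judgment.
def manage_staple_line_leak(defect_mm: float,
                            weeks_since_surgery: float,
                            stenosis_present: bool) -> str:
    """Suggest a first-line strategy for a staple-line leak."""
    if stenosis_present:
        # Downstream stenosis precipitates and propagates leaks, so it is
        # addressed first (e.g., aggressive pneumatic balloon dilation).
        return "treat downstream stenosis (e.g., pneumatic dilation)"
    if defect_mm < 10:
        if weeks_since_surgery <= 6:
            return "percutaneous drain and diversion"
        return "endoscopic internal drainage or vacuum therapy"
    # Larger defects fall outside the two examples given in the text.
    return "consult the full algorithm with the multidisciplinary team"

print(manage_staple_line_leak(defect_mm=8, weeks_since_surgery=4,
                              stenosis_present=False))
# -> percutaneous drain and diversion

Whichever branch applies, as the authors note below, the endpoint is usually drainage and closure by secondary intention rather than immediate closure of the leak site.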

“Clinicians should recognize that the goal for endoscopic management of staple-line leaks is often not necessarily initial closure of the leak site, but rather techniques to promote drainage of material from the perigastric collection into the gastric lumen such that the leak site closes by secondary intention,” wrote Dr. Kumbhari and colleagues.

The clinical practice update was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board. The investigators disclosed relationships with Boston Scientific, Medtronic, Apollo Endosurgery, and others.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Early-onset CRC associated with longer survival


Individuals diagnosed with primary colorectal cancer (CRC) at less than 50 years of age have better survival outcomes than individuals diagnosed at 51-55 years, based on data from more than 750,000 patients.

This finding emphasizes the importance of early CRC detection in younger individuals, reported lead author En Cheng, MD, PhD, of Yale University, New Haven, Conn., and colleagues.

“Early-onset CRC (i.e., CRC diagnosed at age less than 50 years) has been characterized by unique clinical, genetic, and epigenetic characteristics, and thus it may be associated with different survival from CRC diagnosed among individuals older than 50 years,” the investigators wrote in JAMA Network Open. Previous studies comparing survival times across age groups have yielded inconsistent results.

To gain a better understanding, the investigators conducted a retrospective study using data from the National Cancer Database. After excluding patients with primary CRC who had a concomitant diagnosis, a history of other malignant tumors, noninvasive adenocarcinoma, or missing data, the final dataset included 769,871 patients. Early-onset CRC was defined as diagnosis at less than 50 years of age, whereas later-onset CRC was defined as diagnosis at ages 51-55 years.

“Individuals diagnosed at age 50 years were excluded to minimize an apparent screening detection bias at age 50 years in our population, given that these individuals disproportionately presented with earlier stages,” the investigators wrote.

Initial comparisons across groups revealed several significant differences. Individuals in the early-onset group were more often women (47.3% vs. 43.8%; P < .001), members of races in the “other” category (6.9% vs. 5.9%; P < .001), and Medicaid patients (12.3% vs. 10.3%; P < .001). They were also more likely to be diagnosed with stage IV cancer (27.8% vs. 24.1%; P < .001) and have rectal tumors (29.3% vs. 28.7%; P = .004).

In the unadjusted Kaplan-Meier analysis, patients with early-onset CRC had a lower 10-year survival rate than later-onset patients (53.6%; 95% CI, 53.2%-54.0% vs. 54.3%; 95% CI, 53.8%-54.8%; P < .001). The fully adjusted model, however, revealed significantly better survival for early-onset patients, compared with later-onset patients (adjusted hazard ratio, 0.95; 95% CI, 0.93-0.96; P < .001), and this advantage grew when adjusting for stage alone (HR, 0.89; 95% CI, 0.88-0.90; P < .001).
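The reversal between the crude and adjusted estimates reflects confounding: early-onset patients more often presented with stage IV disease, which depresses their unadjusted survival, and holding stage and other covariates constant uncovers the advantage. The sketch below, in Python with the lifelines library, illustrates the general shape of such an analysis; the toy data and column names are invented for illustration and are far simpler than the study's National Cancer Database analysis.

# Illustrative sketch only: toy data and invented column names, not the
# study's actual analysis, which adjusted for many more covariates.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":      [24, 60, 120, 36, 80, 120, 15, 48, 120, 90],
    "died":        [1, 1, 0, 1, 1, 0, 1, 1, 0, 0],
    "early_onset": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # 1 = diagnosed at <50 years
    "stage":       [4, 3, 1, 4, 2, 2, 3, 2, 1, 1],
})

# Crude comparison, analogous to the Kaplan-Meier analysis: one survival
# curve per onset group, ignoring stage entirely.
for grp, sub in df.groupby("early_onset"):
    km = KaplanMeierFitter()
    km.fit(sub["months"], event_observed=sub["died"],
           label=f"early_onset={grp}")
    print(grp, km.median_survival_time_)

# Adjusted comparison, analogous to the Cox model: the hazard ratio for
# early onset is estimated with stage held constant, which is how an
# apparent deficit in crude survival can become an advantage.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.hazard_ratios_)  # early_onset HR < 1 implies longer survival

This mirrors the study's contrast between its stage-only and fully adjusted models: the more confounders the model holds constant, the more the estimate isolates the effect of onset age itself.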

Survival was longest among patients aged 35-39 years (aHR, 0.88; 95% CI, 0.84-0.92; P < .001), compared with those aged 51-55 years, and among early-onset individuals with stage I disease (aHR, 0.87; 95% CI, 0.81-0.93; P < .001) or stage II disease (aHR, 0.86; 95% CI, 0.82-0.90; P < .001), compared with those having the same stages of later-onset CRC. No survival advantage was observed among patients diagnosed at age 25 years or younger or those with stage III or IV disease.

“Interestingly, hereditary nonpolyposis colorectal cancer, owing to underlying mismatch repair deficiency, is associated with superior survival and is often diagnosed in individuals from ages 35-45 years,” the investigators noted. “In contrast, adenomatous polyposis coli syndrome is more common in individuals who are diagnosed with CRC at age younger than 20 years (10%), compared with those diagnosed at later ages (0.1%), and adenomatous polyposis coli syndrome is not associated with a survival advantage. These high penetrance syndromes could partly account for the relative heterogeneity in survival across ages among individuals with early-onset CRC.”
 

 

 

Cautious about interpretation

Dr. Cheng and colleagues concluded their publication with a disclaimer: “Our finding of a survival advantage associated with early-onset CRC among younger individuals should be interpreted cautiously, given that the advantage had a small magnitude and was heterogeneous by age and stage,” they wrote. “Further study is needed to understand the underlying heterogeneity of survival by age and stage among individuals with early-onset CRC.”

Kirbi L. Yelorda, MD, of Stanford (Calif.) University, and colleagues had a similar interpretation.

“These results offer support for effectiveness of treatment in patients diagnosed with CRC at younger ages; however, they must be interpreted within the context of epidemiological and biological factors,” Dr. Yelorda and colleagues wrote in an accompanying editorial.

The findings also suggest that the recent reduction in recommended screening age by the U.S. Preventive Services Task Force – from 50 years to 45 years – is warranted, they added, but screening younger patients remains unnecessary.

“While these results do not suggest that screening should start for patients younger than 45 years, they do support the benefit of early detection in young patients,” Dr. Yelorda and colleagues wrote, noting a “fairly low incidence rate” among individuals younger than 45, which is insufficient to justify the risk-to-benefit ratio and increased costs associated with expanded screening.
 

Important but not surprising

It’s “not surprising” that early-onset patients typically have better survival than later-onset patients, according to Joseph C. Anderson, MD, associate professor at White River Junction Veterans Affairs Medical Center, Hartford, Vt.; Geisel School of Medicine at Dartmouth, Hanover, N.H.; and the University of Connecticut, Farmington.

“They’re younger, have less comorbidities, and can tolerate chemotherapy,” Dr. Anderson said in an interview. “It’s not surprising that people do poorly with later stages. Younger people are no exception.”

Dr. Anderson, who previously coauthored an editorial weighing the pros and cons of earlier screening, noted that earlier screening is needed because of the rising incidence of late-stage diagnoses among younger patients, which, as the study found, are associated with worse outcomes.

Beyond adherence to screening recommendations, Dr. Anderson urged clinicians to be aggressive when doing a workup of CRC symptoms in younger patients, among whom delayed diagnoses are more common.

“We can’t just say it’s something more benign, like hemorrhoids, like we used to,” Dr. Anderson said. “Somebody who’s 30 years old and having rectal bleeding needs to be evaluated promptly – there can’t be a delay.”

The study was supported by the National Institutes of Health and Stand Up To Cancer (grant administered by the American Association for Cancer Research). The investigators disclosed relationships with Evergrande Group, Janssen, Revolution Medicines, and others. One editorialist reported serving as a member of the USPSTF when the guideline for colorectal cancer was developed, and being a coauthor on the guideline. No other disclosures were reported among editorialists. Dr. Anderson reported no relevant conflicts of interest.

Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.

Publications
Topics
Sections

 

Individuals diagnosed with primary colorectal cancer (CRC) at less than 50 years of age have better survival outcomes than individuals diagnosed at 51-55 years, based on data from more than 750,000 patients.

This finding emphasizes the importance of early CRC detection in younger individuals, reported lead author En Cheng, MD, PhD, of Yale University, New Haven, Conn., and colleagues.

“Early-onset CRC (i.e., CRC diagnosed at age less than 50 years) has been characterized by unique clinical, genetic, and epigenetic characteristics, and thus it may be associated with different survival from CRC diagnosed among individuals older than 50 years,” the investigators wrote in JAMA Network Open. Previous studies comparing survival times across age groups have yielded inconsistent results.

To gain a better understanding, the investigator conducted a retrospective study using data from the National Cancer Database. Excluding patients with primary CRC who had concomitant diagnosis, history of other malignant tumors, noninvasive adenocarcinoma, or missing data, the final dataset included 769,871 patients. Early-onset CRC was defined by age less than 50 years, whereas later-onset CRC was defined by ages 51-55 years.

“Individuals diagnosed at age 50 years were excluded to minimize an apparent screening detection bias at age 50 years in our population, given that these individuals disproportionately presented with earlier stages,” the investigators wrote.

Initial comparisons across groups revealed several significant differences. Individuals in the early-onset group were more often women (47.3% vs. 43.8%; P < .001), members of races in the “other” category (6.9% vs. 5.9%; P < .001), and Medicaid patients (12.3% vs. 10.3%; P < .001). They were also more likely to be diagnosed with stage IV cancer (27.8% vs 24.1%; P < .001) and have rectal tumors (29.3% vs. 28.7%; P = .004).

In the unadjusted Kaplan-Meier analysis, patients with early-onset CRC had a lower 10-year survival rate (53.6%; 95% CI, 53.2%-54.0% vs. 54.3%; 95% CI, 53.8%-54.8%; P < .001). The fully adjusted model revealed significantly higher survival for early-onset patients, compared with later-onset patients (adjusted hazard ratio, 0.95; 95% CI, 0.93-0.96; P < .001) . This disparity deepened when adjusting only for stage (HR, 0.89; 95% CI, 0.88-0.90; P < .001).

Survival was longest among patients 35-39 years (aHR, 0.88; 95% CI, 0.84-0.92; P < .001), compared with those aged 51-55, and among early-onset individuals with stage I disease (a HR, 0.87; 95% CI, 0.81-0.93; P < .001) or stage II disease (a HR, 0.86; 95% CI, 0.82-0.90; P < .001), compared with those having the same stages of later-onset CRC. No survival advantage was observed among patients diagnosed at age 25 or younger or those with stage III or IV disease.

“Interestingly, hereditary nonpolyposis colorectal cancer, owing to underlying mismatch repair deficiency, is associated with superior survival and is often diagnosed in individuals from ages 35-45 years,” the investigators noted. “In contrast, adenomatous polyposis coli syndrome is more common in individuals who are diagnosed with CRC at age younger than 20 years (10%), compared with those diagnosed at later ages (0.1%), and adenomatous polyposis coli syndrome is not associated with a survival advantage. These high penetrance syndromes could partly account for the relative heterogeneity in survival across ages among individuals with early-onset CRC.”
 

 

 

Cautious about interpretation

Dr. Cheng and colleagues concluded their publication with a disclaimer: “Our finding of a survival advantage associated with early-onset CRC among younger individuals should be interpreted cautiously, given that the advantage had a small magnitude and was heterogeneous by age and stage,” they wrote. “Further study is needed to understand the underlying heterogeneity of survival by age and stage among individuals with early-onset CRC.”

Kirbi L. Yelorda, MD, of Stanford (Calif.) University, and colleagues, had a similar interpretation.

“These results offer support for effectiveness of treatment in patients diagnosed with CRC at younger ages; however, they must be interpreted within the context of epidemiological and biological factors,” Dr. Yelorda and colleagues wrote in an accompanying editorial.

The findings also suggest that the recent reduction in the recommended screening age by the U.S. Preventive Services Task Force – from 50 years to 45 years – is warranted, they added, though they maintained that screening patients younger than 45 years remains unnecessary.

“While these results do not suggest that screening should start for patients younger than 45 years, they do support the benefit of early detection in young patients,” Dr. Yelorda and colleagues wrote, noting a “fairly low incidence rate” among individuals younger than 45 years, which is insufficient to justify the risk-to-benefit ratio and increased costs associated with expanded screening.

Important but not surprising

It’s “not surprising” that early-onset patients typically have better survival than later-onset patients, according to Joseph C. Anderson, MD, associate professor at White River Junction Veterans Affairs Medical Center, Hartford, Vt.; Geisel School of Medicine at Dartmouth, Hanover, N.H.; and the University of Connecticut, Farmington.

“They’re younger, have less comorbidities, and can tolerate chemotherapy,” Dr. Anderson said in an interview. “It’s not surprising that people do poorly with later stages. Younger people are no exception.”

Dr. Anderson, who previously coauthored an editorial weighing the pros and cons of earlier screening, noted that earlier screening is needed because of the rising incidence of late-stage diagnoses among younger patients, which, as the study found, are associated with worse outcomes.

Beyond adherence to screening recommendations, Dr. Anderson urged clinicians to be aggressive when doing a workup of CRC symptoms in younger patients, among whom delayed diagnoses are more common.

“We can’t just say it’s something more benign, like hemorrhoids, like we used to,” Dr. Anderson said. “Somebody who’s 30 years old and having rectal bleeding needs to be evaluated promptly – there can’t be a delay.”

The study was supported by the National Institutes of Health and Stand Up To Cancer (grant administered by the American Association for Cancer Research). The investigators disclosed relationships with Evergrande Group, Janssen, Revolution Medicines, and others. One editorialist reported serving as a member of the USPSTF when the guideline for colorectal cancer was developed, and being a coauthor on the guideline. No other disclosures were reported among editorialists. Dr. Anderson reported no relevant conflicts of interest.

Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.

Average childbirth costs more than $3,000 out of pocket with private insurance

Article Type
Changed
Fri, 06/25/2021 - 15:29

 

Families with private health insurance pay around $3,000 for newborn delivery and hospitalization, while adding neonatal intensive care can push the bill closer to $5,000, based on a retrospective look at almost 400,000 episodes.

The findings suggest that privately insured families need prenatal financial counseling, as well as screening for financial hardship after delivery, reported lead author Kao-Ping Chua, MD, PhD, assistant professor and health policy researcher in the department of pediatrics and the Susan B. Meister Child Health Evaluation and Research Center at the University of Michigan, Ann Arbor, and colleagues.

“Concern is growing regarding the high and rising financial burden of childbirth for privately insured families,” the investigators wrote in Pediatrics. “Previous studies assessing this burden have focused on out-of-pocket spending for maternal care, including hospitalizations for delivery. However, there are no recent national data on out-of-pocket spending across the childbirth episode, including both deliveries and newborn hospitalizations.”

To address this knowledge gap, Dr. Chua and colleagues turned to Optum’s deidentified Clinformatics Data Mart, comprising 12 million privately insured individuals across the United States. The investigators identified 398,410 childbirth episodes occurring between 2016 and 2019. Each episode was defined as one delivery and at least one newborn hospitalization under the same family plan.

Out-of-pocket costs included copayments, coinsurance, and deductibles. Primary outcomes included mean total out-of-pocket spending and the proportions of episodes exceeding $5,000 or $10,000. Subgroup analyses compared differences in spending for episodes involving neonatal intensive care or cesarean birth.
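
As an illustration of how such outcomes are computed, the following minimal sketch – not the authors' code – rolls toy claim lines up into childbirth episodes and derives mean out-of-pocket spending and the threshold proportions; the table and its field names ("episode_id", "copay", and so on) are assumptions.

import pandas as pd

claims = pd.DataFrame({
    "episode_id":  [1, 1, 2, 2, 3],        # one delivery + newborn stay(s) per episode
    "copay":       [30.0, 0.0, 50.0, 20.0, 0.0],
    "coinsurance": [900.0, 400.0, 2500.0, 800.0, 300.0],
    "deductible":  [1200.0, 300.0, 2000.0, 500.0, 150.0],
})

# out-of-pocket spending per claim = copayment + coinsurance + deductible
claims["oop"] = claims[["copay", "coinsurance", "deductible"]].sum(axis=1)

# roll claims up to the episode level, then compute the primary outcomes
episodes = claims.groupby("episode_id")["oop"].sum()
print("mean out-of-pocket per episode:", round(episodes.mean(), 2))
print("share of episodes > $5,000:", (episodes > 5_000).mean())
print("share of episodes > $10,000:", (episodes > 10_000).mean())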

The mean out-of-pocket spending was $2,281 for delivery and $788 for newborn hospitalization, giving a total of $3,068 per childbirth episode. Coinsurance and deductibles accounted for much of that cost, at 55.8% and 42.1%, respectively, whereas copayments accounted for a relatively minor portion (2.2%).

Almost all episodes (95%) cost more than zero dollars, while 17.1% cost more than $5,000 and 1.0% cost more than $10,000. Total mean out-of-pocket spending was higher for episodes involving cesarean birth ($3,389) or neonatal intensive care ($4,969), the latter of which cost more than $10,000 in 8.8% of episodes.

“Because details on plan benefit design were unavailable, the generalizability of findings to all privately insured Americans is unclear,” the investigators noted. “However, the proportion of childbirth episodes covered by high-deductible health plans in this study is consistent with the prevalence of such plans among Americans with employer-sponsored insurance.”

The findings suggest that financial reform is needed, Dr. Chua and colleagues concluded.

“To avoid imposing undue financial burden on families, private insurers should improve childbirth coverage,” they wrote. “An incremental step would be providing first-dollar coverage of deliveries and newborn hospitalizations before deductibles are met. Ideally, however, insurers would waive most or all cost-sharing for these hospitalizations, consistent with the approach taken by Medicaid programs and many developed countries.”

According to Madeline Sutton, MD, assistant professor of obstetrics and gynecology at Morehouse School of Medicine, Atlanta, the size of the study is commendable, but some details are lacking.

“Although the overall sample size allows for a robust analysis, deciding to not report the confidence intervals in this report does not allow for understanding of [the findings with] smaller sample sizes,” Dr. Sutton said in an interview.

(Dr. Chua and colleagues noted that they did not report confidence intervals because “all differences between subgroups were significant owing to large sample sizes.”)

“Still,” Dr. Sutton went on, “this is an important study that has implications for financial counseling that may need to be a part of preconceptional, prenatal, and postnatal visits for privately insured families to help with planning and to decrease the chances of childbirth-related financial hardships. Additionally, policy-level changes that decrease or eliminate these private insurance–related childbirth-episode costs that may negatively impact some families with lower incomes, are warranted.”

The study was funded by the National Institutes of Health. Dr. Chua disclosed a grant from the National Institute on Drug Abuse, while coauthor Dr. Moniz is supported by the Agency for Healthcare Research and Quality. Dr. Sutton had no relevant disclosures.


Bifidobacteria supplementation regulates newborn immune system

Article Type
Changed
Fri, 06/25/2021 - 12:26

 

Supplementing breastfed infants with bifidobacteria promotes development of a well-regulated immune system, theoretically reducing risk of immune-mediated conditions like allergies and asthma, according to investigators.

These findings support the importance of early gut colonization with beneficial microbes, an event that may affect the immune system throughout life, reported lead author Bethany M. Henrick, PhD, director of immunology and diagnostics at Evolve Biosystems, Davis, Calif., and adjunct assistant professor at the University of Nebraska, Lincoln, and colleagues.

“Dysbiosis of the infant gut microbiome is common in modern societies and a likely contributing factor to the increased incidences of immune-mediated disorders,” the investigators wrote in Cell. “Therefore, there is great interest in identifying microbial factors that can support healthier immune system imprinting and hopefully prevent cases of allergy, autoimmunity, and possibly other conditions involving the immune system.”

Prevailing theory suggests that the rising incidence of neonatal intestinal dysbiosis – which is typical in developed countries – may be caused by a variety of factors, including cesarean sections; modern hygiene practices; antibiotics, antiseptics, and other medications; diets high in fat and sugar; and infant formula.

According to Dr. Henrick and colleagues, a healthy gut microbiome plays the greatest role in immunological development during the first 3 months post partum; specifically, a lack of bifidobacteria during this time has been linked with increased risks of autoimmunity and enteric inflammation, although underlying immune mechanisms remain unclear.

Bifidobacteria also exemplify the symbiotic relationship between mothers, babies, and beneficial microbes. The investigators pointed out that breast milk contains human milk oligosaccharides (HMOs), which humans cannot digest, but are an excellent source of energy for bifidobacteria and other beneficial microbes, giving them a “selective nutritional advantage.”

Bifidobacteria should therefore be common residents within the infant gut, but this is now often not the case, leading Dr. Henrick and colleagues to zero in on the microbe in hopes of determining exactly how beneficial bacteria shape immune development.

It is only recently that the necessary knowledge and techniques to perform studies like this one have become available, the investigators wrote, noting a better understanding of cell-regulatory relationships, advances in immune profiling at the systems level, and new technology that allows for profiling small-volume samples from infants.

The present study involved a series of observational experiments and a small interventional trial.

First, the investigators conducted a wide array of longitudinal analyses of blood and fecal samples from 208 infants in Sweden to characterize immune cell expansion and microbiome colonization of the gut, with a focus on bifidobacteria.

Their results showed that infants lacking bifidobacteria and HMO-utilization genes (which are expressed by bifidobacteria and other beneficial microbes) had higher levels of systemic inflammation, including increased T helper 2 (Th2) and Th17 responses.

“Infants not colonized by Bifidobacteriaceae or in cases where these microbes fail to expand during the first months of life there is evidence of systemic and intestinal inflammation, increased frequencies of activated immune cells, and reduced levels of regulatory cells indicative of systemic immune dysregulation,” the investigators wrote.

The interventional part of the study involved 60 breastfed infants in California. Twenty-nine of the newborns were given 1.8 x 10^10 colony-forming units (CFUs) of B. longum subsp. infantis EVC001 daily from postnatal day 7 to day 28, while the remaining 31 infants were given no supplementation.

Fecal samples were collected on day 6 and day 60. At day 60, supplemented infants had high levels of HMO-utilization genes, plus significantly greater alpha diversity (P = .0001; Wilcoxon), compared with controls. Infants receiving EVC001 also had lower levels of inflammatory cytokines in fecal samples, suggesting that microbes expressing HMO-utilization genes cause a shift away from proinflammatory Th2 and Th17 responses and toward Th1.
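
For readers curious about the mechanics, the following minimal sketch – not the investigators' pipeline – shows the kind of comparison behind that alpha diversity result: a Shannon diversity index per infant, compared across groups with a Wilcoxon rank-sum test (implemented in SciPy as mannwhitneyu). All counts are synthetic, and the group sizes simply echo the study's 29 and 31 infants.

import numpy as np
from scipy.stats import entropy, mannwhitneyu

rng = np.random.default_rng(1)

def shannon(counts):
    # Shannon alpha diversity (natural-log index) of one sample's taxon counts
    return entropy(counts / counts.sum())

# rows = infants, columns = bacterial taxa (toy 16S-style count tables)
evc001  = rng.poisson(5, size=(29, 40))   # 29 supplemented infants
control = rng.poisson(5, size=(31, 40))   # 31 unsupplemented infants

div_evc001  = [shannon(sample) for sample in evc001]
div_control = [shannon(sample) for sample in control]

# two-sided rank-based comparison of per-infant diversity between groups
stat, p = mannwhitneyu(div_evc001, div_control, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, P = {p:.4f}")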

“It is not the simple presence of bifidobacteria that is responsible for the immune effects but the metabolic partnership between the bacteria and HMOs,” the investigators noted.

According to principal investigator Petter Brodin, MD, PhD, professor of pediatric immunology at Karolinska Institutet, Solna, Sweden, the findings deserve further investigation.

“Our data indicate that substitution with beneficial microbes efficiently metabolizing HMOs could open a way to prevent cases of immune-mediated diseases, but larger, randomized trials aimed at this will be required to determine this potential,” Dr. Brodin said in an interview.

Carolynn Dude, MD, PhD, assistant professor in the division of maternal-fetal medicine at Emory University, Atlanta, agreed that more work is needed.

“While this study provides some insight into the mechanisms that may set up a newborn for poor health outcomes later in life, the data is still very limited, and more long-term follow-up on these infants is needed before recommending any sort of bacterial supplementation to a newborn,” Dr. Dude said in an interview.

Dr. Brodin and colleagues are planning an array of related studies, including larger clinical trials; further investigations into mechanisms of action; comparisons between the present cohort and infants in Kenya, where immune-mediated diseases are rare; and evaluations of vaccine responses and infectious disease susceptibility.

The study was supported by the European Research Council, the Swedish Research Council, the Marianne & Marcus Wallenberg Foundation, and others. The investigators disclosed relationships with Cytodelics, Scailyte, Kancera, and others. Dr. Dude reported no relevant conflicts of interest.

Publications
Topics
Sections

 

Supplementing breastfed infants with bifidobacteria promotes development of a well-regulated immune system, theoretically reducing risk of immune-mediated conditions like allergies and asthma, according to investigators.

These findings support the importance of early gut colonization with beneficial microbes, an event that may affect the immune system throughout life, reported lead author Bethany M. Henrick, PhD, director of immunology and diagnostics at Evolve Biosystems, Davis, Calif., and adjunct assistant professor at the University of Nebraska, Lincoln, and colleagues.

“Dysbiosis of the infant gut microbiome is common in modern societies and a likely contributing factor to the increased incidences of immune-mediated disorders,” the investigators wrote in Cell. “Therefore, there is great interest in identifying microbial factors that can support healthier immune system imprinting and hopefully prevent cases of allergy, autoimmunity, and possibly other conditions involving the immune system.”

Prevailing theory suggests that the rising incidence of neonatal intestinal dysbiosis – which is typical in developed countries – may be caused by a variety of factors, including cesarean sections; modern hygiene practices; antibiotics, antiseptics, and other medications; diets high in fat and sugar; and infant formula.

According to Dr. Henrick and colleagues, a healthy gut microbiome plays the greatest role in immunological development during the first 3 months post partum; specifically, a lack of bifidobacteria during this time has been linked with increased risks of autoimmunity and enteric inflammation, although underlying immune mechanisms remain unclear.

Bifidobacteria also exemplify the symbiotic relationship between mothers, babies, and beneficial microbes. The investigators pointed out that breast milk contains human milk oligosaccharides (HMOs), which humans cannot digest, but are an excellent source of energy for bifidobacteria and other beneficial microbes, giving them a “selective nutritional advantage.”

Bifidobacteria should therefore be common residents within the infant gut, but this is often not now the case, leading Dr. Henrick and colleagues to zero in on the microbe, in hopes of determining the exactly how beneficial bacteria shape immune development.

It is only recently that the necessary knowledge and techniques to perform studies like this one have become available, the investigators wrote, noting a better understanding of cell-regulatory relationships, advances in immune profiling at the systems level, and new technology that allows for profiling small-volume samples from infants.

The present study involved a series of observational experiments and a small interventional trial.

First, the investigators conducted a wide array of blood- and fecal-based longitudinal analyses from 208 infants in Sweden to characterize immune cell expansion and microbiome colonization of the gut, with a focus on bifidobacteria.

Their results showed that infants lacking bifidobacteria, and HMO-utilization genes (which are expressed by bifidobacteria and other beneficial microbes), had higher levels of systemic inflammation, including increased T helper 2 (Th2) and Th17 responses.

“Infants not colonized by Bifidobacteriaceae or in cases where these microbes fail to expand during the first months of life there is evidence of systemic and intestinal inflammation, increased frequencies of activated immune cells, and reduced levels of regulatory cells indicative of systemic immune dysregulation,” the investigators wrote.

The interventional part of the study involved 60 breastfed infants in California. Twenty-nine of the newborns were given 1.8 x 1010 colony-forming units (CFUs) of B. longum subsp. infantis EVC001 daily from postnatal day 7 to day 28, while the remaining 31 infants were given no supplementation.

Fecal samples were collected on day 6 and day 60. At day 60, supplemented infants had high levels of HMO-utilization genes, plus significantly greater alpha diversity (P = .0001; Wilcoxon), compared with controls. Infants receiving EVC001 also had less inflammatory fecal cytokines, suggesting that microbes expressing HMO-utilization genes cause a shift away from proinflammatory Th2 and Th17 responses, and toward Th1.

Dr. Petter Brodin

“It is not the simple presence of bifidobacteria that is responsible for the immune effects but the metabolic partnership between the bacteria and HMOs,” the investigators noted.

According to principal investigator Petter Brodin, MD, PhD, professor of pediatric immunology at Karolinska Institutet, Solna, Sweden, the findings deserve further investigation.

“Our data indicate that substitution with beneficial microbes efficiently metabolizing HMOs could open a way to prevent cases of immune-mediated diseases, but larger, randomized trials aimed at this will be required to determine this potential,” Dr. Brodin said in an interview.

Dr. Carolynn Dude

Carolynn Dude, MD, PhD, assistant professor in the division of maternal-fetal medicine at Emory University, Atlanta, agreed that more work is needed.

“While this study provides some insight into the mechanisms that may set up a newborn for poor health outcomes later in life, the data is still very limited, and more long-term follow-up on these infants is needed before recommending any sort of bacterial supplementation to a newborn,” Dr. Dude said in an interview.

Dr. Brodin and colleagues are planning an array of related studies, including larger clinical trials; further investigations into mechanisms of action; comparisons between the present cohort and infants in Kenya, where immune-mediated diseases are rare; and evaluations of vaccine responses and infectious disease susceptibility.

The study was supported by the European Research Council, the Swedish Research Council, the Marianne & Marcus Wallenberg Foundation, and others. The investigators disclosed relationships with Cytodelics, Scailyte, Kancera, and others. Dr. Dude reported no relevant conflicts of interest.

 


Memory benefit seen with antihypertensives crossing blood-brain barrier

Article Type
Changed
Mon, 06/21/2021 - 19:05

 

Antihypertensive medications that cross the blood-brain barrier (BBB) may be linked with less memory decline, compared with other drugs for high blood pressure, suggest the findings of a meta-analysis.

Over a 3-year period, cognitively normal older adults taking BBB-crossing antihypertensives demonstrated superior verbal memory, compared with similar individuals receiving non–BBB-crossing antihypertensives, reported lead author Jean K. Ho, PhD, of the Institute for Memory Impairments and Neurological Disorders at the University of California, Irvine, and colleagues.

According to the investigators, the findings add color to a known link between hypertension and neurologic degeneration, and may aid the search for new therapeutic targets.

“Hypertension is a well-established risk factor for cognitive decline and dementia, possibly through its effects on both cerebrovascular disease and Alzheimer’s disease,” Dr. Ho and colleagues wrote in Hypertension. “Studies of antihypertensive treatments have reported possible salutary effects on cognition and cerebrovascular disease, as well as Alzheimer’s disease neuropathology.”

In a previous study, individuals younger than 75 years exposed to antihypertensives had an 8% decreased risk of dementia per year of use, while another trial showed that intensive blood pressure–lowering therapy reduced mild cognitive impairment by 19%.
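
As a back-of-envelope illustration of what a per-year figure like that implies, the sketch below assumes, purely for illustration (the cited study's actual risk model is not described here), that the 8% relative reduction compounds multiplicatively with each year of use.

```python
# Back-of-envelope reading of the "8% decreased risk per year of use" figure.
# Assumption (ours, for illustration only): the relative risk reduction
# compounds multiplicatively with each year of antihypertensive use.
for years in (1, 5, 10):
    rr = 0.92 ** years
    print(f"{years:>2} years: relative risk {rr:.2f} ({(1 - rr):.0%} reduction)")
```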

“Despite these encouraging findings ... larger meta-analytic studies have been hampered by the fact that pharmacokinetic properties are typically not considered in existing studies or routine clinical practice,” wrote Dr. Ho and colleagues. “The present study sought to fill this gap [in that it was] a large and longitudinal meta-analytic study of existing data recoded to assess the effects of BBB-crossing potential in renin-angiotensin system [RAS] treatments among hypertensive adults.”
 

Methods and results

The meta-analysis included randomized clinical trials, prospective cohort studies, and retrospective observational studies. The researchers assessed data on 12,849 individuals from 14 cohorts whose members received either BBB-crossing or non–BBB-crossing antihypertensives.
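
The article does not describe how effects were pooled across the 14 cohorts. As a rough sketch of what such pooling can look like, the example below implements the widely used DerSimonian-Laird random-effects estimator on hypothetical cohort-level effect sizes and variances; none of the numbers are the study's.

```python
# Rough illustration of pooling cohort-level effects in a meta-analysis.
# The article does not state the authors' estimator; this sketch uses the
# common DerSimonian-Laird random-effects method on hypothetical effect
# sizes (d) and variances (v).
import numpy as np

def dersimonian_laird(d, v):
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                   # inverse-variance weights
    d_fixed = np.sum(w * d) / np.sum(w)           # fixed-effect estimate
    q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # between-cohort variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

pooled, se = dersimonian_laird(d=[0.05, 0.10, 0.02, 0.09],
                               v=[0.0010, 0.0020, 0.0015, 0.0010])
print(f"pooled d = {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}")
```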

The BBB-crossing properties of RAS treatments were identified by a literature review. Of the ACE inhibitors, captopril, fosinopril, lisinopril, perindopril, ramipril, and trandolapril were classified as BBB-crossing, and benazepril, enalapril, moexipril, and quinapril were classified as non–BBB-crossing. Of the ARBs, telmisartan and candesartan were considered BBB-crossing, and olmesartan, eprosartan, irbesartan, and losartan were tagged as non–BBB-crossing.
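
This classification amounts to a simple lookup table. Restating it as code makes the grouping used in the analysis explicit; the drug names below are transcribed from the lists above, while the helper function is our own illustration, not part of the study.

```python
# The article's BBB-crossing classification, restated as a lookup table.
BBB_CROSSING = {
    # ACE inhibitors
    "captopril": True, "fosinopril": True, "lisinopril": True,
    "perindopril": True, "ramipril": True, "trandolapril": True,
    "benazepril": False, "enalapril": False, "moexipril": False, "quinapril": False,
    # Angiotensin II receptor blockers (ARBs)
    "telmisartan": True, "candesartan": True,
    "olmesartan": False, "eprosartan": False, "irbesartan": False, "losartan": False,
}

def crosses_bbb(drug: str) -> bool:
    """Return True if the agent was classified as BBB-crossing in this analysis."""
    return BBB_CROSSING[drug.lower()]

print(crosses_bbb("Ramipril"))  # True
print(crosses_bbb("Losartan"))  # False
```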

Cognition was assessed across the following seven domains: executive function, attention, verbal memory (learning), language, mental status, verbal memory (recall), and processing speed.

Compared with individuals taking non–BBB-crossing antihypertensives, those taking BBB-crossing agents had significantly superior verbal memory (recall), with a maximum effect size of 0.07 (P = .03).

According to the investigators, this finding was particularly noteworthy, as the BBB-crossing group had relatively higher vascular risk burden and lower mean education level.

“These differences make it all the more remarkable that the BBB-crossing group displayed better memory ability over time despite these cognitive disadvantages,” the investigators wrote.

Still, not all the findings favored BBB-crossing agents. Individuals in the BBB-crossing group had relatively inferior attention ability, with a minimum effect size of –0.17 (P = .02).

The other cognitive measures were not significantly different between groups.
 

Clinicians may consider findings after accounting for other factors

Principal investigator Daniel A. Nation, PhD, associate professor of psychological science and a faculty member of the Institute for Memory Impairments and Neurological Disorders at the University of California, Irvine, suggested that the small difference in verbal memory between groups could be clinically significant over a longer period of time.

“Although the overall effect size was pretty small, if you look at how long it would take for someone [with dementia] to progress over many years of decline, it would actually end up being a pretty big effect,” Dr. Nation said in an interview. “Small effect sizes could actually end up preventing a lot of cases of dementia,” he added.

The conflicting results in the BBB-crossing group – better verbal memory but worse attention ability – were “surprising,” he noted.

“I sort of didn’t believe it at first,” Dr. Nation said, “because the memory finding is sort of replication – we’d observed the same exact effect on memory in a smaller sample in another study. ... The attention [finding], going another way, was a new thing.”

Dr. Nation suggested that the intergroup differences in attention ability may stem from idiosyncrasies of the tests used to measure that domain, which can be impacted by cardiovascular or brain vascular disease. Or it could be caused by something else entirely, he said, noting that further investigation is needed.

He added that the improvements in verbal memory within the BBB-crossing group could be caused by direct effects on the brain. He pointed out that certain ACE polymorphisms have been linked with Alzheimer’s disease risk, and those same polymorphisms, in animal models, lead to neurodegeneration, with reversal possible through administration of ACE inhibitors.

“It could be that what we’re observing has nothing really to do with blood pressure,” Dr. Nation explained. “This could be a neuronal effect on learning memory systems.”

He went on to suggest that clinicians may consider these findings when selecting antihypertensive agents for their patients, with the caveat that all other prescribing factors have already been taken into account.

“In the event that you’re going to give an ACE inhibitor or an angiotensin receptor blocker anyway, and it ends up being a somewhat arbitrary decision in terms of which specific drug you’re going to give, then perhaps this is a piece of information you would take into account – that one gets in the brain and one doesn’t – in somebody at risk for cognitive decline,” Dr. Nation said.
 

 

 

Exact mechanisms of action unknown

Hélène Girouard, PhD, assistant professor of pharmacology and physiology at the University of Montreal, said in an interview that the findings are “of considerable importance, knowing that brain alterations could begin as much as 30 years before manifestation of dementia.”

Since 2003, Dr. Girouard has been studying the cognitive effects of antihypertensive medications. She noted that previous studies involving rodents “have shown beneficial effects [of BBB-crossing antihypertensive drugs] on cognition independent of their effects on blood pressure.”

The drugs’ exact mechanisms of action, however, remain elusive, according to Dr. Girouard, who offered several possible explanations, including amelioration of BBB disruption, brain inflammation, cerebral blood flow dysregulation, cholinergic dysfunction, and neurologic deficits. “Whether these mechanisms may explain Ho and colleagues’ observations remains to be established,” she added.

Andrea L. Schneider, MD, PhD, assistant professor of neurology at the University of Pennsylvania, Philadelphia, applauded the study, but ultimately suggested that more research is needed to impact clinical decision-making.

“The results of this important and well-done study suggest that further investigation into targeted mechanism-based approaches to selecting hypertension treatment agents, with a specific focus on cognitive outcomes, is warranted,” Dr. Schneider said in an interview. “Before changing clinical practice, further work is necessary to disentangle contributions of medication mechanism, comorbid vascular risk factors, and achieved blood pressure reduction, among others.”

The investigators disclosed support from the National Institutes of Health, the Alzheimer’s Association, the Waksman Foundation of Japan, and others. The interviewees reported no relevant conflicts of interest.
