
Additional D1 biopsy increased diagnostic yield for celiac disease


Among a large cohort of patients referred for endoscopy either for suspected celiac disease or for other upper gastrointestinal symptoms, a single additional D1 biopsy specimen from any site significantly increased the diagnostic yield for celiac disease, according to researchers.

Of 1,378 patients who had D2 and D1 biopsy specimens taken, 268 were newly diagnosed with celiac disease, and 26 had villous atrophy confined to D1, defined as ultrashort celiac disease (USCD). Compared with a standard D2 biopsy, an additional D1 biopsy increased the diagnostic yield by 9.7% (P less than .0001). Among the 26 diagnosed with USCD, 7 had normal D2 biopsy specimens, and 4 others had negative tests for endomysial antibodies (EMAs), totaling 11 patients for whom celiac disease would have been missed in the absence of a D1 biopsy.


“The addition of a D1 biopsy specimen to diagnose celiac disease may reduce the known delay in diagnosis that many patients with celiac disease experience. This may allow earlier institution of a gluten-free diet, potentially prevent nutritional deficiencies, and reduce the symptomatic burden of celiac disease,” wrote Dr. Peter Mooney of Royal Hallamshire Hospital, Sheffield, England, and his colleagues (Gastroenterology. 2016 Apr 7. doi: 10.1053/j.gastro.2016.01.029).

The prospective study recruited 1,378 consecutive patients referred to a single teaching hospital for endoscopy from 2008 to 2014. In total, 268 were newly diagnosed with celiac disease, and 26 were diagnosed with USCD.

To investigate the optimal site for targeted D1 sampling, 171 patients underwent quadrantic D1 biopsy, 61 of whom were diagnosed with celiac disease. Biopsy specimens from any topographical area resulted in high sensitivity, a fact that increases the feasibility of a D1 biopsy policy, since no specific target area is required, according to the researchers. Nonceliac abnormalities such as peptic duodenitis or gastric heterotopia have been suggested to impede interpretation of D1 biopsies, but these were rare in the study and did not interfere with the analysis.

USCD may be an early form of conventional celiac disease, an idea supported by the findings. Compared with patients diagnosed with conventional celiac disease, patients diagnosed with USCD were younger and had a much lower rate of diarrhea, which by decision-tree analysis was the single factor discriminating between the two groups. Compared with healthy controls, individuals with conventional celiac disease, but not USCD, were more likely to present with anemia, diarrhea, a family history of celiac disease, lethargy, and osteoporosis. Patients with USCD and conventional disease had similar rates of IgA tissue transglutaminase antibodies (tTG), but USCD patients had lower titers (P less than .001). The USCD group also had fewer ferritin and folate deficiencies.

The researchers suggested that clinical phenotypic differences may be due to minimal loss of absorptive capacity associated with a short segment of villous atrophy. Given the younger average age at diagnosis of USCD and lower tTG titers, USCD may represent an early stage of celiac disease, resulting in fewer nutritional deficiencies observed because of a shorter lead time to diagnosis.

Although USCD patients had a milder clinical phenotype, which has raised concerns that a strict gluten-free diet may be unnecessary, follow-up data demonstrated that a gluten-free diet produced improvement in symptoms and a significant decrease in the tTG titer. These results may indicate that the immune cascade was switched off, according to the researchers, and that early diagnosis may present a unique opportunity to prevent further micronutrient deficiency.

Dr. Mooney and his coauthors reported having no relevant financial disclosures.


Article Source

FROM GASTROENTEROLOGY

Vitals

Key clinical point: When added to a standard D2 biopsy, a single D1 biopsy from any site significantly increased the diagnostic yield for celiac disease.

Major finding: In total, 26 of 268 patients diagnosed with celiac disease had villous atrophy confined to D1 (ultrashort celiac disease); an additional D1 biopsy increased the diagnostic yield by 9.7% (P less than .0001), compared with a standard D2 biopsy.

Data source: A prospective study of 1,378 consecutive patients referred to a single teaching hospital for endoscopy from 2008 to 2014, 268 of whom were newly diagnosed with celiac disease and 26 with USCD.

Disclosures: Dr. Mooney and his coauthors reported having no relevant financial disclosures.

Racial disparities in colon cancer survival mainly driven by tumor stage at presentation

Results applicable to older black, white patients only

Although black patients with colon cancer received significantly less treatment than white patients, particularly for late-stage disease, much of the overall survival disparity between black and white patients was explained by tumor presentation at diagnosis rather than by treatment differences, according to an analysis of SEER data.

Among demographically matched black and white patients, the 5-year survival difference was 8.3% (P less than .0001). Presentation match reduced the difference to 5.0% (P less than .0001), which accounted for 39.8% of the overall disparity. Additional matching by treatment reduced the difference only slightly, to 4.9% (P less than .0001), which accounted for 1.2% of the overall disparity. Black patients had lower rates of most treatments than presentation-matched white patients, including surgery (88.5% vs. 91.4%), and these differences were most pronounced at advanced stages. For example, significant differences between black and white patients in the use of chemotherapy were observed for stage III (53.1% vs. 64.2%; P less than .0001) and stage IV (56.1% vs. 63.3%; P = .001) disease.


“Our results indicate that tumor presentation, including tumor stage, is indeed one of the most important factors contributing to the racial disparity in colon cancer survival. We observed that, after controlling for demographic factors, black patients in comparison with white patients had a significantly higher proportion of stage IV and lower proportions of stages I and II disease. Adequately matching on tumor presentation variables (e.g., stage, grade, size, and comorbidity) significantly reduced survival disparities,” wrote Dr. Yinzhi Lai of the Department of Medical Oncology at Sidney Kimmel Cancer Center, Philadelphia, and colleagues (Gastroenterology. 2016 Apr 4. doi: 10.1053/j.gastro.2016.01.030).

Treatment differences explained a higher proportion of the demographic-matched survival disparity in advanced-stage patients than in early-stage patients. For example, in stage II patients, treatment match produced only modest changes in the 2-, 3-, and 5-year survival rate disparities (from 2.7% to 2.8%, from 4.1% to 3.6%, and from 4.6% to 4.0%, respectively); by contrast, in stage III patients, treatment match produced more substantial reductions in the 2-, 3-, and 5-year survival rate disparities (from 4.5% to 2.2%, from 3.1% to 2.0%, and from 4.3% to 2.8%, respectively). A similar effect was observed in patients with stage IV disease. The results suggest that, “to control survival disparity, more efforts may need to be tailored to minimize treatment disparities (especially chemotherapy use) in patients with advanced-stage disease,” the investigators wrote.

The retrospective data analysis used patient information from 68,141 patients (6,190 black, 61,951 white) aged 66 years and older with colon cancer identified from the National Cancer Institute SEER-Medicare database. Using a novel minimum distance matching strategy, investigators drew from the pool of white patients to match three distinct comparison cohorts to the same 6,190 black patients. Close matches between black and white patients bypassed the need for model-based analysis.

The primary matching analysis was limited by the inability to control for substantial differences in socioeconomic status, marital status, and urban/rural residence. A subcohort analysis of 2,000 matched black and white patients showed that when socioeconomic status was added to the demographic match, survival differences were reduced, indicating the important role of socioeconomic status on racial survival disparities.

Significantly better survival was observed in all patients who were diagnosed in 2004 or later, the year the Food and Drug Administration approved oxaliplatin and bevacizumab. Separating the cohorts into those diagnosed before and after 2004 revealed that the racial survival disparity was lower in the more recent group, indicating a favorable impact of oxaliplatin and/or bevacizumab in reducing the survival disparity.


Prior studies have documented racial disparities in the incidence and outcomes of colon cancer in the United States. Black men and women have a higher overall incidence and more advanced stage of disease at diagnosis than white men and women, while being less likely to receive guideline-concordant treatment.


To extend this work, the authors evaluated treatment disparities between black and white colon cancer patients aged 66 years and older and examined the impact of a variety of patient characteristics on racial disparities in overall survival, using a novel, sequential matching algorithm that minimized the overall distance between black and white patients based on demographic, tumor-specific, and treatment-related variables. The authors found that differences in overall survival were mainly driven by tumor presentation; however, advanced-stage black colon cancer patients received less guideline-concordant treatment than white patients. While this minimum-distance algorithm provided close black-white matches on prespecified factors, it could not accommodate other factors (for example, socioeconomic, marital, and urban/rural status); therefore, methodologic improvements to this method and comparisons to other commonly used approaches (that is, propensity score matching and weighting) are warranted.

Finally, these results apply to older black and white colon cancer patients with Medicare fee-for-service coverage only. Additional research using similar methods in older Medicare Advantage populations or younger adults may uncover unique drivers of overall survival disparities by race, which may require tailored interventions.

Jennifer L. Lund, Ph.D., is an assistant professor, department of epidemiology, University of North Carolina at Chapel Hill. She receives research support from the UNC Oncology Clinical Translational Research Training Program (K12 CA120780), as well as through a Research Starter Award from the PhRMA Foundation to the UNC Department of Epidemiology.


Article Source

FROM GASTROENTEROLOGY


Vitals

Key clinical point: Tumor stage at diagnosis had a greater effect on survival disparities between black and white patients with colon cancer than treatment differences.

Major finding: Among demographically matched black and white patients, the 5-year survival difference was 8.3% (P less than .0001); matching by presentation reduced the difference to 5.0% (P less than .0001), and additional matching by treatment reduced the difference only slightly to 4.9% (P less than .0001).

Data sources: In total, 68,141 patients (6,190 black, 61,951 white) aged 66 years and older with colon cancer were identified from the National Cancer Institute SEER-Medicare database. Three white comparison cohorts were assembled and matched to the same 6,190 black patients.

Disclosures: Dr. Lai and coauthors reported having no disclosures.

New interventions improve symptoms of GERD

Article Type
Changed
Display Headline
New interventions improve symptoms of GERD

Patients with chronic gastroesophageal reflux disease (GERD) who have failed long-term proton pump inhibitor (PPI) therapy can benefit from surgical intervention with magnetic sphincter augmentation, according to a new study that has validated the long-term safety and efficacy of this procedure.

All 85 patients in the cohort had used PPIs at baseline, but PPI use declined to 15.3% at 5 years. Moderate or severe regurgitation also decreased significantly, from 57% of patients at baseline to 1.2% at the 5-year follow-up.

In a second related study, researchers found that compared with patients on esomeprazole therapy, GERD patients who underwent laparoscopic antireflux surgery (LARS) experienced significantly greater reductions in 24-hour esophageal acid exposure after 6 months and at 5 years. Both approaches were effective in achieving and maintaining a reduction in distal esophageal acid exposure down to a normal level, but LARS nearly abolished gastroesophageal acid reflux.


Both studies were published in the May issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2015.05.028; doi: 10.1016/j.cgh.2015.07.025).

Gastroesophageal reflux disease (GERD) is caused by excessive exposure of esophageal mucosa to gastric acid. Left unchecked, it can lead to chronic symptoms and complications, and is associated with a higher risk for Barrett’s esophagus and esophageal adenocarcinoma.

In the first study, Dr. Robert A. Ganz of Minnesota Gastroenterology PA, Plymouth, Minn., and colleagues conducted a prospective international study that looked at the safety and efficacy of a magnetic device in adults with GERD.

The Food and Drug Administration approved this magnetic device, which augments lower esophageal sphincter function in patients with GERD, in 2012; the current paper reports the final results after 5 years of follow-up.

Quality of life, reflux control, use of PPIs, and side effects were evaluated, and the GERD health-related quality of life (GERD-HRQL) questionnaire was administered at baseline to patients on and off PPIs, and after placement of the device.

A partial response to PPIs was defined as a GERD-HRQL score of 10 or less on PPIs and a score of 15 or higher off PPIs, or a 6-point or more improvement when scores on vs. off PPI were compared.
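The study's partial-response criteria can be expressed as a simple boolean check. This is a hedged illustration only: the function name and the idea of encoding the definition as code are the sketch's own, not the paper's.

```python
def partial_ppi_response(hrql_on_ppi: int, hrql_off_ppi: int) -> bool:
    """Partial PPI response per the study's definition: a GERD-HRQL
    score of 10 or less on PPIs and 15 or higher off PPIs, or an
    improvement of at least 6 points on vs. off PPIs."""
    meets_thresholds = hrql_on_ppi <= 10 and hrql_off_ppi >= 15
    improved_by_six = (hrql_off_ppi - hrql_on_ppi) >= 6
    return meets_thresholds or improved_by_six

print(partial_ppi_response(8, 20))   # True: 8 <= 10 and 20 >= 15
print(partial_ppi_response(12, 14))  # False: thresholds missed, gain only 2
```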

During the follow-up period, there were no device erosions, migrations, or malfunctions. The median GERD-HRQL score was 27 in patients not taking PPIs and 11 in patients on PPIs at the start of the study. After 5 years with the device in place, this score decreased to 4.

All patients reported that they had the ability to belch and vomit if they needed to. The proportion of patients reporting bothersome swallowing was 5% at baseline and 6% at 5 years (P = .739), and bothersome gas-bloat was present in 52% at baseline but decreased to 8.3% at 5 years.

“Without a procedure to correct an incompetent lower esophageal sphincter, it is unlikely that continued medical therapy would have improved these reflux symptoms, and the severity and frequency of the symptoms may have worsened,” wrote the authors.

In the second study, Dr. Jan G. Hatlebakk of Haukeland University Hospital, Bergen, Norway, and his colleagues analyzed data from a prospective, randomized, open-label trial that compared the efficacy and safety of LARS with esomeprazole (20 or 40 mg/d) over a 5-year period in patients with chronic GERD.

Among patients in the LARS group (n = 116), the median 24-hour esophageal acid exposure was 8.6% at baseline and 0.7% after 6 months and 5 years (P less than .001 vs. baseline).

In the esomeprazole group (n = 151), the median 24-hour esophageal acid exposure was 8.8% at baseline, 2.1% after 6 months, and 1.9% after 5 years (P less than .001, therapy vs. baseline, and LARS vs. esomeprazole).

Gastric acidity was stable in both groups, and patients who needed a dose increase to 40 mg/d experienced more severe supine reflux at baseline, but less esophageal acid exposure (P less than .02) and gastric acidity after their dose was increased. Esophageal and intragastric pH parameters, both on and off therapy, did not seem to predict long-term symptom breakthrough.

“We found that neither intragastric nor intraesophageal pH parameters could predict the short- and long-term therapeutic outcome, which indicates that response to therapy in patients with GERD is individual and not related directly to normalization of acid reflux parameters alone,” wrote Dr. Hatlebakk and coauthors.



VIDEO: Eight new quality measures key to performance of esophageal manometry

Article Type
Changed
Display Headline
VIDEO: Eight new quality measures key to performance of esophageal manometry

Health care providers performing esophageal manometry should keep in mind eight new quality measures listed and validated in a recent study published in the April issue of Clinical Gastroenterology and Hepatology (Clin Gastroenterol Hepatol. 2015 Oct 20. doi: 10.1016/j.cgh.2015.10.006), which researchers believe will significantly improve the performance of esophageal manometry and interpretation of data culled from such procedures.

“Despite its critical importance in the diagnosis and management of esophageal motility disorders, features of a high-quality esophageal manometry [study] have not been formally defined,” said the study authors, led by Dr. Rena Yadlapati of Northwestern University in Chicago. “Standardizing key aspects of esophageal manometry is imperative to ensure the delivery of high-quality care.”

SOURCE: AMERICAN GASTROENTEROLOGICAL ASSOCIATION

Dr. Yadlapati and her coinvestigators carried out the study in accordance with the RAND/UCLA Appropriateness Method (RAM). They began by recruiting a panel of 15 esophageal manometry experts, selected for leadership, geographic diversity, and a wide range of practice settings.

Investigators then conducted a literature review, selecting the 30 most relevant randomized, controlled trials, retrospective studies, and systematic reviews from the past 10 years. From this review, investigators created a list of 30 possible quality measures, all of which were then sent to each member of the expert panel via email for them to rank on a 9-point interval scale, and modify if necessary.

Those rankings were then used to determine the appropriateness of each proposed quality measure at a face-to-face meeting among the investigators and the 15-member expert panel, at which 17 quality measures were determined to be appropriate. In all, 2 measures dealt with competency, 2 pertained to assessment before the procedure, 3 addressed performance of the procedure itself, and 10 concerned interpretation of data obtained from esophageal manometry; the 10 interpretation measures were compiled into 1, leaving a total of 8 that were ultimately approved.

The quality measures for competency are as follows:

• “If esophageal manometry is performed, then the technician must be competent to perform esophageal manometry.”

• “If a physician is considered competent to interpret esophageal manometry, then the physician must interpret a minimum number of esophageal manometry studies annually.”

For assessment before procedure, the measures state the following:

• “If a patient is referred for esophageal manometry, then the patient should have undergone an evaluation for structural abnormalities before manometry.”

• “If an esophageal manometry is performed, then informed consent must be obtained and documented.”

Quality measures regarding the procedure itself state the following:

• “If an esophageal manometry study is performed, then a time interval of at least 30 seconds should occur between swallows.”

• “If an esophageal manometry study is performed, then at least 10 wet swallows should be attempted.”

• “If an esophageal manometry study is performed, then at least seven evaluable wet swallows should be included.”

Finally, regarding interpretation of data, the single quality measure states that “If an esophageal manometry study is interpreted, then a complete procedure report should document the following:

• “Reason for referral.”

• “Clinical diagnosis.”

• “Diagnosis according to formally validated classification scheme.”

• “Documentation of formally validated classification scheme used.”

• “Summary of results.”

• “Tabulated results including upper esophageal sphincter activity, interpretation of esophagogastric junction relaxation, documentation of pressure inversion point if technically feasible, pressurization pattern and contractile pattern.”

• “Technical limitation (if applicable).”

• “Communication to referring provider.”

“These eight appropriate quality measures are considered absolutely necessary in the performance and interpretation of esophageal manometry,” the authors concluded. “In particular, measures 3-8 are clinically feasible and measurable, and should serve as an initial framework to benchmark quality and reduce variability in esophageal manometry practices.”

This study was funded by the Alumnae of Northwestern University, and a grant to Dr. Yadlapati (T32 DK101363-02). Five coinvestigators disclosed consultancy and speaking relationships with Boston Scientific, Cook Endoscopy, EndoStim, Given Imaging, Covidien, and Sandhill Scientific.

dchitnis@frontlinemedcom.com


Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Vitals

Key clinical point: Health care providers should consider eight new validated quality measures when performing and interpreting esophageal manometry data.

Major finding: Of 30 possible measures, 10 regarding interpretation of data were compiled into a single quality measure, 2 were classified as competency measures, 2 were classified as assessments necessary prior to an esophageal manometry procedure, and 3 were classified as integral to the procedure of esophageal manometry, for a total of 8.

Data source: Survey of existing literature and expert interviews on validated quality measures on the basis of the RAM.

Disclosures: Study was partly funded by a grant from the Alumnae of Northwestern University; five coauthors reported financial disclosures.

VIDEO: Rectal indomethacin does not prevent pancreatitis post ERCP

Rectal indomethacin may still be protective in high-risk patients
Article Type
Changed
Display Headline
VIDEO: Rectal indomethacin does not prevent pancreatitis post ERCP

Patients who receive rectal indomethacin after undergoing endoscopic retrograde cholangiopancreatography (ERCP) are no less likely to develop pancreatitis than individuals who do not, according to the findings of a recent study published in Gastroenterology (2016 Jan 9. doi: 10.1053/j.gastro.2015.12.018).

“These results are in contrast to recent studies highlighting the benefit of rectal NSAIDs to prevent PEP [post-ERCP pancreatitis] in high-risk patients [and] counter the guidelines espoused by the European Society for Gastrointestinal Endoscopy, which recently recommended giving rectal indomethacin to prevent PEP in all patients undergoing ERCP,” said the study authors, led by Dr. John M. Levenick of Penn State University in Hershey, Pa.

 

 

SOURCE: AMERICAN GASTROENTEROLOGICAL ASSOCIATION

Dr. Levenick and his coinvestigators screened 604 consecutive patients undergoing ERCP, with and without endoscopic ultrasound, at the Dartmouth-Hitchcock Medical Center between March 2013 and December 2014, eventually enrolling and randomizing 449 subjects into two cohorts: one in which subjects were given indomethacin after undergoing ERCP (n = 223), and one in which subjects were given a placebo (n = 226). Randomization occurred after the subject’s major papilla had been reached and cannulation attempts had begun.

Individuals were excluded if they had active acute pancreatitis, had undergone ERCP to treat or diagnose acute pancreatitis, had contraindications or allergies to NSAIDs, or were younger than 18 years, among other factors. The mean age of the indomethacin cohort was 64.9 years, and 118 (52.9%) were female; in the placebo cohort, the mean age was 64.3 years, and 118 (52.2%) were female.

Pancreatitis occurred in 27 subjects overall: 16 (7.2%) in the indomethacin cohort and 11 (4.9%) in the placebo cohort (P = .33). No subject receiving indomethacin had severe or moderately severe PEP, whereas in the placebo cohort one subject had severe PEP and one had moderately severe PEP (P = 1.0). There was no necrotizing pancreatitis in either cohort, nor were there any significant differences in gastrointestinal bleeding (P = .75), death (P = .25), or 30-day hospital readmission (P = .1) between the two cohorts.
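As a quick sanity check (a minimal sketch using only the raw counts reported above, not part of the study’s own analysis), the quoted incidences can be recomputed directly:

```python
# Recompute the reported post-ERCP pancreatitis (PEP) incidences
# from the raw counts: 16/223 (indomethacin) and 11/226 (placebo).
indomethacin_pep, indomethacin_n = 16, 223
placebo_pep, placebo_n = 11, 226

indomethacin_rate = 100 * indomethacin_pep / indomethacin_n  # ~7.2%
placebo_rate = 100 * placebo_pep / placebo_n                 # ~4.9%

print(f"indomethacin: {indomethacin_rate:.1f}%, placebo: {placebo_rate:.1f}%")
```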

“Prophylactic rectal indomethacin did not reduce the incidence or severity of PEP in consecutive patients undergoing ERCP,” Dr. Levenick and his coauthors concluded, adding that “guidelines that recommend the administration of rectal indomethacin in all patients undergoing ERCP should be reconsidered.”

This study was funded by the National Pancreas Foundation and a grant from the National Institutes of Health. Dr. Levenick and his coauthors did not report any financial disclosures.

dchitnis@frontlinemedcom.com

Body

Acute pancreatitis is the most common and feared complication of endoscopic retrograde cholangiopancreatography (ERCP). The incidence of post-ERCP pancreatitis is around 10%, with a mortality of 0.7% (Gastrointest Endosc. 2015;81:143-9). Recent advances in noninvasive pancreaticobiliary imaging, risk stratification before ERCP, prophylactic pancreatic stent placement, and administration of nonsteroidal anti-inflammatory drugs (NSAIDs) have improved the overall risk-benefit ratio of ERCP.

NSAIDs are potent inhibitors of phospholipase A2, cyclooxygenase, and the activation of platelets and endothelium, all of which play a central role in the pathogenesis of post-ERCP pancreatitis. NSAIDs constitute an attractive option in clinical practice because they are inexpensive and widely available with a favorable risk profile. A recent multicenter randomized controlled trial (RCT) of 602 patients at high risk for post-ERCP pancreatitis showed that rectal indomethacin is associated with a 7.7% absolute and a 46% relative risk reduction of post-ERCP pancreatitis (N Engl J Med. 2012;366:1414-22). These findings have been broadly adopted in endoscopic practice in the United States.
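The two figures quoted for that trial are linked: relative risk reduction is the absolute reduction divided by the control-group event rate, so the pair implies the underlying rates. A back-of-the-envelope sketch (derived figures, not quoted from the trial report):

```python
# Back out the implied placebo-group PEP rate from the quoted
# 7.7% absolute and 46% relative risk reduction (ARR = RRR * control rate).
arr = 7.7   # absolute risk reduction, in percentage points
rrr = 0.46  # relative risk reduction, as a fraction

control_rate = arr / rrr           # implied placebo-group PEP rate, ~16.7%
treated_rate = control_rate - arr  # implied indomethacin-group rate, ~9.0%

print(f"implied control rate: {control_rate:.1f}%, treated rate: {treated_rate:.1f}%")
```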

 

Dr. Georgios Papachristou

The RCT by Dr. Levenick and his colleagues evaluated the efficacy of rectal indomethacin in preventing post-ERCP pancreatitis among consecutive patients undergoing ERCP at a single U.S. center. This was a well-designed and well-conducted RCT, following the CONSORT guidelines and utilizing an independent data and safety monitoring board.

The authors reported that rectal indomethacin did not result in a reduction of post-ERCP pancreatitis (7.2%) when compared with placebo (4.9%). Of importance, 70% of the patients included were at average risk for post-ERCP pancreatitis. Furthermore, despite a calculated sample size of 1,398 patients, the study was terminated early, after enrolling only 449 patients, when an interim analysis showed futility in reaching a statistically significant difference.

This well-executed RCT reports no benefit from administering rectal indomethacin to all patients undergoing ERCP. Evidence strongly supports that rectal indomethacin remains an important advancement in preventing post-ERCP pancreatitis; however, its benefit is likely limited to a select group of patients, those at high risk for post-ERCP pancreatitis. Further studies are under way to clarify whether rectal indomethacin alone vs. indomethacin plus prophylactic pancreatic stenting is more effective in preventing post-ERCP pancreatitis in high-risk patients.

Dr. Georgios Papachristou is associate professor of medicine at the University of Pittsburgh. He is a consultant for Shire and has received funding from the National Institutes of Health and the VA Health System.

Publications
Topics
Legacy Keywords
rectal, indomethacin, pancreatitis, ERCP, endoscopic, retrograde, cholangiopancreatography, Levenick
Sections

Title
Rectal indomethacin may still be protective in high-risk patients


Publications
Topics
Article Type
Display Headline
VIDEO: Rectal indomethacin does not prevent pancreatitis post ERCP
Legacy Keywords
rectal, indomethacin, pancreatitis, ERCP, endoscopic, retrograde, cholangiopancreatography, Levenick
Sections
Article Source

FROM GASTROENTEROLOGY

Vitals

Key clinical point: Rectal indomethacin does not prevent pancreatitis in patients who undergo endoscopic retrograde cholangiopancreatography (ERCP).

Major finding: 7.2% of subjects on indomethacin and 4.9% on placebo developed post-ERCP pancreatitis, a nonsignificant difference between the two cohorts (P = .33).

Data source: Prospective, double-blind, placebo-controlled study of 449 ERCP patients between March 2013 and December 2014.

Disclosures: Study funded by National Pancreas Foundation and National Institutes of Health. Dr. Levenick and his coauthors did not report any relevant financial disclosures.

VIDEO: Newer MRI hardware, software significantly better at detecting pancreatic cysts

Newer MRIs much better at detecting pancreatic cysts
Article Type
Changed
Display Headline
VIDEO: Newer MRI hardware, software significantly better at detecting pancreatic cysts

As magnetic resonance imaging technology continues to advance year after year, so does MRI’s ability to accurately detect pancreatic cysts, according to a new study published in the April issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2015.08.038).

“To our knowledge, this is the first study to analyze the relationship between the technical improvements in imaging techniques (specifically, MRI) and the presence of incidentally found PCLs [pancreatic cystic lesions],” said the study authors, led by Dr. Michael B. Wallace of the Mayo Clinic in Jacksonville, Fla.

 

 

Dr. Michael B. Wallace

Dr. Wallace and his coinvestigators conducted this retrospective descriptive study by selecting the first 50 consecutive abdominal MRI patients at the Jacksonville Mayo Clinic during January and February of each year from 2005 through 2014, for a total of 500 cases meeting inclusion criteria. Patients were excluded if they had preexisting symptomatic or asymptomatic pancreatitis (acute or chronic), pancreatic masses, pancreatic cysts, pancreatic surgery, pancreatic symptoms, or any pancreas-related indications found by MRI.

The clinic underwent periodic MRI updates over the course of the 10-year study, along with requisite software updates to “take advantage of the new hardware technology,” the study explains. Major hardware improvements, provided by Siemens Medical Solutions USA, were Symphony/Sonata, Espree/Avanto, and Aera/Skyra, while software updates corresponding to each hardware update were VA, VB, and VD, respectively.

 

 

SOURCE: AMERICAN GASTROENTEROLOGICAL ASSOCIATION

Furthermore, each software update had other, smaller upgrades, leading to a total of 20 combinations of MRI hardware and software on which MRIs were performed over the 10 years. Every MRI taken included “an axial and a coronal T2-weighted single-shot (HASTE) pulse sequence [with] TR 1400-1500 ms, TE 82-99 ms, and slice thickness 5-7 mm (gap, 0.5-0.7 mm).” Each MRI was analyzed by a pancreatic MRI specialist to find incidental cysts.

The number of patients found with pancreatic cysts increased incrementally from 2005 to 2014, with 2010 being the year with the highest number. A total of 208 subjects (41.6%) were found to have incidental cysts, but only 44 of these cases were discovered in the original MRI. The presence of cysts was associated with older age: only 20% of subjects under 50 years of age had cysts, compared with 32.4% of those between 50 and 60 years, 54.9% of those between 60 and 70 years, and 61.5% of those over 70 years (P less than .01).

Additionally, 56.4% of all subjects with diabetes (P less than .01), 59.0% of subjects with nonmelanoma skin cancer (P less than .03), and 58.1% of those with hepatocarcinoma (P less than .02) were also found to have cysts. Most striking, however, is that newer hardware and software permutations were able to detect cysts in 56.3% (Skyra) of patients who had them, compared with only 30.3% (Symphony) of patients who underwent MRI on older technology.

“The variable field strength (1.5 T vs. 3 T) was not significantly related to the presence of PCLs,” Dr. Wallace and his coauthors concluded. “We believe this may be secondary to the lack of power of the analysis, because only 6% of the examinations were 3-T studies. Therefore, we speculate that this relationship may be confirmed if the number of 3-T studies increased.”

Males and females each made up roughly 50% of the study population, with a median age of 60 years; 85% of subjects were white. Additionally, 4% of subjects had a family history of pancreatic cancer, 12% had a personal history of solid organ transplant, and 53% had a personal history of smoking.

This study was funded by the Mayo Clinic. Dr. Wallace disclosed that he has received grant funding from Olympus, Boston Scientific, and Cosmo Pharmaceuticals, and travel support from Olympus. No other authors reported any financial disclosures.

dchitnis@frontlinemedcom.com

Body

The increasing prevalence of pancreatic cystic lesions on MRI scanning may provide an important opportunity for detection of early precursors of pancreatic cancer – or may represent just another insignificant incidental finding. What is the implication of a small asymptomatic cyst?

MRI scanning of the pancreas has revolutionized our ability to detect early cystic neoplasms of the pancreas. Pancreatic cysts appear as well-defined, small, round fluid-filled structures within the pancreas. The inner structures – such as septations, nodules, and adjacent masses – offer clues as to the type of cyst and the risk of malignancy. But the real strength of pancreatic MRI scanning is the ability to detect and portray small cysts and the adjacent main pancreatic duct.  

The size, number, and distribution of cysts over time can be tracked with MRI surveillance. By tracking the diameter of cysts and calculating the rate of growth of cysts, clinicians may be able to predict the development of malignancy in intraductal papillary mucinous neoplasms.

How should these patients be managed clinically? Once a cyst has been identified, are clinicians obligated to notify the patient, monitor the cyst with an established surveillance program, or biopsy the cyst? If the cyst is very small and benign appearing, can the clinician ignore the finding and perhaps not notify the patient?  

Once again, we are watching dilemmas unfold as technology outstrips our understanding of diseases and their management. We are going to need some good correlations between imaging and tissue of pancreatic cystic lesions. In the meantime, it is important to reserve the use of pancreatic MRI scanning for high-risk patients or patients with CT scan abnormalities.

Dr. William R. Brugge, AGAF, is professor of medicine, Harvard Medical School, and director, Pancreas Biliary Center, Massachusetts General Hospital, both in Boston. He is a consultant with Boston Scientific.

Publications
Topics
Legacy Keywords
MRI, hardware, software, technology, pancreatic, cysts
Sections

Title
Newer MRIs much better at detecting pancreatic cysts

As magnetic resonance imaging technology continues to advance year after year, so does MRI’s ability to accurately detect pancreatic cysts, according to a new study published in the April issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2015.08.038).

“To our knowledge, this is the first study to analyze the relationship between the technical improvements in imaging techniques (specifically, MRI) and the presence of incidentally found PCLs [pancreatic cystic lesions],” said the study authors, led by Dr. Michael B. Wallace of the Mayo Clinic in Jacksonville, Fla.

 

 

Dr. Michael B. Wallace

Dr. Wallace and his coinvestigators launched this retrospective descriptive study selecting the first 50 consecutive abdominal MRI patients at the Jacksonville Mayo Clinic during January and February of each year from 2005 through 2014, for a total of 500 cases who met inclusion criteria included in the study. Patients were excluded if they had preexisting symptomatic or asymptomatic pancreatitis, either acute or chronic, pancreatic masses, pancreatic cysts, pancreatic surgery, pancreatic symptoms, or any pancreas-related indications found by MRI.

The clinic underwent periodic MRI updates over the course of the 10-year study, along with requisite software updates to “take advantage of the new hardware technology,” the study explains. Major hardware improvements, provided by Siemens Medical Solutions USA, were Symphony/Sonata, Espree/Avanto, and Aera/Skyra, while software updates corresponding to each hardware update were VA, VB, and VD, respectively.

 

 

SOURCE: AMERICAN GASTROENTEROLOGICAL ASSOCIATION


As magnetic resonance imaging technology continues to advance year after year, so does MRI’s ability to accurately detect pancreatic cysts, according to a new study published in the April issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2015.08.038).

“To our knowledge, this is the first study to analyze the relationship between the technical improvements in imaging techniques (specifically, MRI) and the presence of incidentally found PCLs [pancreatic cystic lesions],” said the study authors, led by Dr. Michael B. Wallace of the Mayo Clinic in Jacksonville, Fla.


Dr. Wallace and his coinvestigators conducted this retrospective, descriptive study by selecting the first 50 consecutive abdominal MRI patients at the Jacksonville Mayo Clinic during January and February of each year from 2005 through 2014, for a total of 500 patients who met inclusion criteria. Patients were excluded if they had preexisting acute or chronic pancreatitis (symptomatic or asymptomatic), pancreatic masses, pancreatic cysts, prior pancreatic surgery, pancreatic symptoms, or any pancreas-related indications for the MRI.

The clinic underwent periodic MRI updates over the course of the 10-year study, along with requisite software updates to “take advantage of the new hardware technology,” the study explains. Major hardware improvements, provided by Siemens Medical Solutions USA, were Symphony/Sonata, Espree/Avanto, and Aera/Skyra, while software updates corresponding to each hardware update were VA, VB, and VD, respectively.


Furthermore, each software update had other, smaller upgrades, leading to a total of 20 combinations of MRI hardware and software on which MRIs were performed over the 10 years. Every MRI taken included “an axial and a coronal T2-weighted single-shot (HASTE) pulse sequence [with] TR 1400-1500 ms, TE 82-99 ms, and slice thickness 5-7 mm (gap, 0.5-0.7 mm).” Each MRI was analyzed by a pancreatic MRI specialist to find incidental cysts.

The number of patients found to have pancreatic cysts increased incrementally from 2005 to 2014, with 2010 being the year with the highest number. A total of 208 subjects (41.6%) were found to have incidental cysts, but only 44 of these cases were discovered in the original MRI. The presence of cysts was associated with older age: 20% of subjects under 50 years of age had cysts, compared with 32.4% of those between 50 and 60 years, 54.9% of those between 60 and 70 years, and 61.5% of those over 70 years (P less than .01).
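As a quick consistency check, the overall prevalence figure follows directly from the counts reported above; this minimal sketch (plain Python, using only numbers quoted in the article) reproduces it:

```python
# Consistency check on the reported prevalence of incidental cysts,
# using only counts quoted in the article above.
subjects_with_cysts = 208
total_subjects = 500

prevalence = subjects_with_cysts / total_subjects * 100
print(f"{prevalence:.1f}%")  # 41.6%
```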

Additionally, cysts were found in 56.4% of subjects with diabetes (P less than .01), 59.0% of subjects with nonmelanoma skin cancer (P less than .03), and 58.1% of those with hepatocarcinoma (P less than .02). Most striking, however, the newest hardware and software combination (Skyra) detected cysts in 56.3% of patients, compared with only 30.3% for patients who underwent MRI on the oldest technology (Symphony).

“The variable field strength (1.5 T vs. 3 T) was not significantly related to the presence of PCLs,” Dr. Wallace and his coauthors concluded. “We believe this may be secondary to the lack of power of the analysis, because only 6% of the examinations were 3-T studies. Therefore, we speculate that this relationship may be confirmed if the number of 3-T studies increased.”

Males and females each made up roughly 50% of the study population, with a median age of 60 years and 85% being white. Additionally, 4% of subjects had a family history of pancreatic cancer, 12% had a personal history of solid organ transplant, and 53% had a personal history of smoking.

This study was funded by the Mayo Clinic. Dr. Wallace disclosed that he has received grant funding from Olympus, Boston Scientific, and Cosmo Pharmaceuticals, and travel support from Olympus. No other authors reported any financial disclosures.

dchitnis@frontlinemedcom.com

Display Headline
VIDEO: Newer MRI hardware, software significantly better at detecting pancreatic cysts
Legacy Keywords
MRI, hardware, software, technology, pancreatic, cysts
Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

Key clinical point: Newer MRI technology detects pancreatic cysts more effectively; cysts were more common in older patients and in those with diabetes.

Major finding: Newer MRI hardware and software detected pancreatic cysts in 56.3% of patients, compared with only 30.3% on older MRI hardware and software.

Data source: Retrospective, descriptive study of 500 patients undergoing MRI for nonpancreatic indications during January and February of 2005-2014.

Disclosures: Study funded by the Mayo Clinic. Dr. Michael B. Wallace disclosed relationships with Olympus, Boston Scientific, and Cosmo Pharmaceuticals.

VIDEO: Anesthesia services during colonoscopy increase risk of near-term complications

Anesthesia during colonoscopy may not be worth the cost
Article Type
Changed
Display Headline
VIDEO: Anesthesia services during colonoscopy increase risk of near-term complications

Receiving anesthesia services while undergoing a colonoscopy may not be in your patients’ best interest, as doing so could significantly increase the likelihood of patients experiencing serious complications within 30 days of the procedure.

This is according to a new study published in the April issue of Gastroenterology, in which Dr. Karen J. Wernli and her coinvestigators analyzed claims data, collected from the Truven Health MarketScan Research Database, related to 3,168,228 colonoscopy procedures that took place between 2008 and 2011, to determine whether patients who received anesthesia were at a higher risk of developing complications after the procedure (doi: 10.1053/j.gastro.2015.12.018).


“The involvement of anesthesia services for colonoscopy sedation, mainly to administer propofol, has increased accordingly, from 11.0% of colonoscopies in 2001 to 23.4% in 2006, with projections of more than 50% in 2015,” wrote Dr. Wernli of the Group Health Research Institute in Seattle, and her coauthors. “Whether the use of propofol is associated with higher rates of short-term complications compared with standard sedation is not well understood.”

Men and women included in the study were between 40 and 64 years of age; men accounted for 46.8% of those receiving standard sedation (53.2% women) and 46.5% of those receiving anesthesia services (53.5% women). A total of 4,939,993 individuals were initially screened for enrollment; 39,784 were excluded because of a previous colorectal cancer diagnosis, 240,038 for “noncancer exclusions,” and 1,491,943 for having less than 1 year of enrollment.

Standard sedation was used in 2,079,784 (65.6%) of the procedures included in the study, while the other 1,088,444 (34.4%) colonoscopies involved anesthesia services. Use of anesthesia services was associated with a 13% increase in the likelihood of experiencing a complication within 30 days of colonoscopy (odds ratio, 1.13; 95% confidence interval, 1.12-1.14). The most common complications were perforation (OR, 1.07; 95% CI, 1.00-1.15), hemorrhage (OR, 1.28; 95% CI, 1.27-1.30), abdominal pain (OR, 1.07; 95% CI, 1.05-1.08), complications secondary to anesthesia (OR, 1.15; 95% CI, 1.05-1.28), and “stroke and other central nervous system events” (OR, 1.04; 95% CI, 1.00-1.08).
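For readers translating these figures, each odds ratio maps directly to a percent increase in the odds of that complication; this minimal sketch (plain Python) performs that conversion using the point estimates quoted above — note the overall odds ratio of 1.13 is implied by the reported 13% increase and its confidence interval, not stated verbatim in the study quote:

```python
# Percent increase in odds implied by each odds ratio reported above.
# The "any complication" value of 1.13 is inferred from the reported
# 13% increase (95% CI, 1.12-1.14).
odds_ratios = {
    "any complication": 1.13,
    "perforation": 1.07,
    "hemorrhage": 1.28,
    "abdominal pain": 1.07,
    "anesthesia complications": 1.15,
    "CNS events": 1.04,
}

for outcome, or_value in odds_ratios.items():
    pct_increase = (or_value - 1) * 100
    print(f"{outcome}: {pct_increase:.0f}% higher odds")
```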

Analysis of the geographic distribution of colonoscopies performed with and without anesthesia services showed that anesthesia was associated with a higher likelihood of postcolonoscopy complications in all areas of the United States except the Southeast, where no association was found. Additionally, in the western U.S., use of anesthesia services was less common than in any other region but was associated with a 60% higher risk of complication within 30 days among patients who received it.

“Although the use of anesthesia agents can directly impact colonoscopy outcomes, it is not solely the anesthesia agent that could lead to additional complications,” the study authors wrote. “In the absence of patient feedback, increased colonic-wall tension from colonoscopy pressure may not be identified by the endoscopist, and, consistent with our results, could lead to increased risks of colonic complications, such as perforation and abdominal pain.”

Dr. Wernli and her coauthors did not report any relevant financial disclosures.

dchitnis@frontlinemedcom.com


We are approaching a time when half of all colonoscopies are performed with anesthesia assistance, most using propofol. Undeniably, some patients require anesthesia support for medical reasons, or because they do not sedate adequately with opiate-benzodiazepine combinations endoscopists can administer. The popularity of propofol-based anesthesia for routine colonoscopy, however, is based on several perceived benefits: patient demand for a discomfort-free procedure, rapid sedation followed by quick recovery, and good reimbursement for the anesthesia service itself, added to the benefits of faster overall procedure turnaround time. And presently, there is no disincentive — financial or otherwise — to continuing or expanding this practice. Colonoscopy with anesthesia looks like a win-win for both patient and endoscopist, as long as the added cost of anesthesia can be justified.

However, while anesthesia-assisted colonoscopy appears to possess several advantages, growing evidence suggests that a lower risk of complications is not one of them.

A smaller study (165,000 colonoscopies) using NCI SEER registry data suggested that adding anesthesia to colonoscopy may increase some adverse events. Cooper et al. (JAMA Intern Med. 2013;173:551-6) showed an increase in overall complications and, specifically, aspiration, although not in technical complications of colonoscopy, including perforation and splenic rupture. However, this study did not include patients who underwent polypectomy. Wernli et al. now show evidence derived from over 3 million patients demonstrating that adding anesthesia to colonoscopy significantly increases complications — not only aspiration, but also technical complications of colonoscopy, including perforation, bleeding, and abdominal pain.

Colonoscopy is extremely safe, so complications are infrequent. Thus, data sets of colonoscopy complications large enough to be statistically meaningful for studies of this type require an extraordinarily large patient pool. For this prospective, observational cohort study, the authors obtained the large sample size by mining administrative claims data for 3 years, not by examining clinical data. As a result, several assumptions were made. These 3 million colonoscopies represented all indications — not just colorectal cancer screening. Billing claims for anesthesia served as surrogate markers for administration of propofol-based anesthesia. Anesthesia assistance was associated with increased risk of perforation, hemorrhage, abdominal pain, anesthesia complications, and stroke, but the increased risk of perforation was seen only in patients who underwent polypectomy.

Study methodology and confounding variables aside, it is hard to ignore the core message here: a large body of rigorously analyzed data demonstrates that anesthesia support for colonoscopy increases the risk of procedure-related complications.

Patients who are ill, have certain cardiopulmonary issues, or do not sedate adequately with moderate sedation benefit from anesthesia assistance for colonoscopy. But for patients undergoing routine colonoscopy, without such issues, who could safely undergo colonoscopy under moderate sedation without unreasonable discomfort, we must now ask ourselves and discuss with our patients honestly, not only whether the added cost of anesthesia is reasonable — but also whether the apparent added risk of anesthesia justifies perceived benefits.

Dr. John A. Martin is senior associate consultant and associate professor, associate chair for endoscopy, Mayo Clinic, Rochester, Minn. He has no conflicts of interest to disclose.




Article Source

FROM GASTROENTEROLOGY


Vitals

Key clinical point: Use of anesthesia services during colonoscopy is associated with an increased overall risk of complications from the procedure.

Major finding: Colonoscopy patients who received anesthesia had a 13% higher risk of complication within 30 days, including perforation, hemorrhage, abdominal pain, and stroke.

Data source: A prospective cohort study of claims data from 3,168,228 colonoscopy procedures in the Truven Health MarketScan Research Databases from 2008 to 2011.

Disclosures: Funding provided by the Agency for Healthcare Research and Quality and the National Institutes of Health. Dr. Wernli and her coauthors did not report any relevant financial disclosures.

High gluten consumption early in life upped risk of celiac disease

High early gluten consumption upped celiac disease
Article Type
Changed
Display Headline
High gluten consumption early in life upped risk of celiac disease

Children who were genetically susceptible to celiac disease and consumed high amounts of gluten at 12 months of age were at least twice as likely to develop the autoimmune disorder as genetically predisposed children who consumed less gluten, researchers reported in the March issue of Clinical Gastroenterology and Hepatology.

The association was similar among children who carried any of the major human leukocyte antigen (HLA) risk genotypes for celiac disease, said Dr. Carin Aronsson at Lund University in Sweden and her associates. “Because these HLA risk genotypes are widely distributed in the general population, these findings may have consequence for future infant feeding recommendations,” they said. They recommended repeating the study in other countries to confirm the link.

In order to develop celiac disease, patients must consume gluten and carry at least one of the relevant DR3-DQ2 and DR4-DQ8 HLA risk haplotypes. But because gluten is widely consumed in products containing wheat, rye, and barley, and because about half of whites have at least one of the two haplotypes, gluten intolerance probably depends on other environmental factors, the researchers said. To further study these factors, they compared 3-day food diaries collected at ages 9, 12, 18, and 24 months for 146 children with positive tissue transglutaminase autoantibody (tTGA) assays and biopsy-confirmed celiac disease (cases) and 436 tTGA-negative children (controls). Cases and controls were matched by age, sex, and HLA genotype (Clin Gastroenterol Hepatol. 2015 Oct 7. doi: 10.1016/j.cgh.2015.09.030).

The food diaries revealed higher gluten intake among cases, compared with controls, beginning at 12 months of age, the researchers said. Notably, cases consumed a median of 4.9 g of gluten a day before tTGA seroconversion, 1 g more than the median for controls of the same age (odds ratio, 1.3; 95% confidence interval, 1.1-1.5; P = .0002). Furthermore, significantly more cases than controls consumed amounts of gluten in the highest tertile (more than 5 g per day) before seroconversion (OR, 2.7; 95% CI, 1.7-4.1; P less than .0001). These associations were similar among children of all haplotype profiles and trended in the same direction among children with and without first-degree relatives with celiac disease.
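The lede's claim that high consumers were at least twice as likely to develop celiac disease lines up with the highest-tertile odds ratio, with a caveat; this minimal sketch (plain Python, using only the figures quoted above) makes the relationship explicit:

```python
# Reported odds ratio for gluten intake in the highest tertile (>5 g/day).
or_point = 2.7
ci_lower, ci_upper = 1.7, 4.1  # 95% confidence interval, as reported

# The point estimate exceeds a doubling of odds...
print(or_point >= 2.0)   # True
# ...but the lower CI bound does not, so "at least twice as likely"
# reflects the point estimate rather than the full confidence interval.
print(ci_lower >= 2.0)   # False
```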

Cases and controls resembled each other in terms of breastfeeding duration, age at first introduction to gluten, and total daily caloric intake, the investigators noted. “The prospective design of this birth cohort study enabled us to obtain the diet information before seroconversion of tTGA as a marker of celiac disease,” they said. “This eliminated the risk of reporting biases or a change in feeding habits because of the knowledge of serology results or disease status.” But they did not analyze the number of daily servings of foods that contained gluten. “We cannot exclude the possibility that the number of portions given frequently during the course of the day may have different effects on disease risk,” they said.

The National Institutes of Health, Juvenile Diabetes Research Foundation, and the Centers for Disease Control and Prevention funded the study. The investigators had no disclosures.

Source: American Gastroenterological Association


Long-suffering Swedish children probably have the highest rate of celiac disease in the world. This rate has dramatically increased. Why, and why now? Previous studies have shown that it is not breastfeeding. It is not age or timing of introduction of gluten. It is not likely to be infections. This study shows that it is the amount of gluten that drives children with the highest genetic risk for celiac disease to develop the disease early in life. This conversion is preceded by a high intake of gluten. While these results alone should not determine general infant feeding practices, it suggests that if you are a Swedish child who carries these high-risk genes, high quantities of gluten early in life are not for you.

This study also raises the question of the effect of high-dose gluten in adults at risk. Previously, studies have shown that the prevalence of celiac disease in adults in Sweden is not much different from the pediatric population. This study needs to be expanded to other Western populations where the rate of celiac disease is not so high. While nutritional engineering on a grand scale should not be undertaken lightly given the possibility of unexpected consequences, it behooves at least the Swedish population to perhaps reexamine their cultural practices of incorporating high gluten-containing cereals early in the lives of children, most especially those at particular risk for celiac disease.

Dr. Joseph A. Murray, AGAF, is professor of medicine, consultant, division of gastroenterology and hepatology, and department of immunology, and director of the Celiac Disease Program at the Mayo Clinic, Rochester, Minn.


Title
High early gluten consumption upped celiac disease

Children who were genetically susceptible to celiac disease and consumed high amounts of gluten at 12 months of age were at least twice as likely to develop the autoimmune disorder as genetically predisposed children who consumed less gluten, researchers reported in the March issue of Clinical Gastroenterology and Hepatology.

The association was similar among children who carried any of the major human leukocyte antigen (HLA) risk genotypes for celiac disease, said Dr. Carin Aronsson at Lund University in Sweden and her associates. “Because these HLA risk genotypes are widely distributed in the general population, these findings may have consequence for future infant feeding recommendations,” they said. They recommended repeating the study in other countries to confirm the link.


Display Headline
High gluten consumption early in life upped risk of celiac disease
Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Vitals

Key clinical point: High levels of gluten consumption in early life significantly increased the risk of celiac disease.

Major finding: The odds of celiac disease were more than twice as high among children who consumed more than 5 g of gluten a day, compared with those who consumed less gluten (OR, 2.65; P less than .0001).

Data source: A 1:3 matched nested case-control study of 146 children with biopsy-confirmed celiac disease (cases) and 436 tissue transglutaminase autoantibody (tTGA)-negative controls.

Disclosures: The National Institutes of Health, Juvenile Diabetes Research Foundation, and the Centers for Disease Control and Prevention funded the study. The investigators had no disclosures.

Mirtazapine improves functional dyspepsia in small study

Article Type
Changed
Display Headline
Mirtazapine improves functional dyspepsia in small study

The antidepressant mirtazapine improved weight loss, early satiation, nausea, and other signs and symptoms in patients with functional dyspepsia, said the authors of a placebo-controlled pilot study published in the March issue of Clinical Gastroenterology and Hepatology.

The findings suggest that mirtazapine “has the potential to become the treatment of choice for functional dyspepsia in patients with weight loss, and evaluation in larger multicenter studies is warranted,” said Dr. Jan Tack and his associates at the University of Leuven, Belgium.

Functional dyspepsia, one of the most prevalent gastrointestinal disorders, is characterized by early satiation, postprandial fullness, and epigastric pain and burning in the absence of underlying systemic or metabolic disease. Up to 40% of affected patients lose weight, an “alarm symptom” that until now has lacked effective treatment, the researchers said.

©Artem_Furman/Thinkstockphotos.com

Mirtazapine, an antagonist of the H1, alpha2, 5-hydroxytryptamine (5-HT)2c, and 5-HT3 receptors, often causes weight gain when used to treat depression. Therefore, the investigators designed a double-blind single-center pilot trial of 34 patients with functional dyspepsia who had lost more than 10% of their original body weight. After a 2-week run-in period, half the patients were randomized to 15 mg of mirtazapine every evening and the other half to placebo (Clin Gastroenterol Hepatol. 2016 Jan 9. doi: 10.1016/j.cgh.2015.09.043).

The average weight of placebo patients remained almost unchanged throughout the trial, while patients on mirtazapine gained an average of 2.5 ± 0.6 kg by week 4 (P = .003 for between-group comparison) and 3.9 ± 0.7 kg, or 6.4% of their original body weight, by week 8 (P less than .0001). Mean scores on a validated dyspepsia symptom severity (DSS) questionnaire improved significantly between baseline and weeks 4 (P = .003) and 8 (P = .017) for mirtazapine but not placebo. Directly comparing the two groups in terms of the DSS revealed a large effect size that trended toward significance (P = .06) at week 4 but not at week 8 (P = .55). However, mirtazapine significantly outperformed placebo in measures of early satiety, quality of life, gastrointestinal-specific anxiety, and nutrient tolerance, “mostly with large effect sizes,” the investigators said.

Mirtazapine did not affect epigastric pain or gastric emptying, and had little effect on postprandial fullness. Moreover, 2 of 17 patients in the mirtazapine group dropped out of the study because of unacceptable levels of drowsiness, which is a common side effect of the medication.

Many patients with functional dyspepsia respond inadequately to first-line treatment with acid-suppressive or prokinetic drugs, the investigators noted. While tegaserod, buspirone, and acotiamide can improve gastric accommodation, it is unknown if they promote weight gain. The results for mirtazapine are promising, but the pilot trial included only tertiary care patients, and the small sample size precluded separate analyses of patients with postprandial distress syndrome as opposed to epigastric pain syndrome, the researchers said.

The study was funded by Leuven University, the FWO, and the KU Leuven Special Research Fund. Mirtazapine and placebo were supplied by MSD Belgium. The investigators had no disclosures.



Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Vitals

Key clinical point: Mirtazapine treatment led to weight gain and a number of other improvements among patients with functional dyspepsia and weight loss.

Major finding: Patients regained an average of 6.4% of their original body weight on mirtazapine, while those on placebo did not regain weight.

Data source: A single-center randomized double-blind study of 34 patients with functional dyspepsia.

Disclosures: Leuven University, the FWO, and the KU Leuven Special Research Fund helped fund the study. Mirtazapine and placebo were supplied by MSD Belgium. The investigators had no disclosures.

Study backed familial component of advanced adenoma risk

Study supports earlier CRC screening of first-degree relatives of CRC patients
Article Type
Changed
Display Headline
Study backed familial component of advanced adenoma risk

Siblings of patients with advanced adenoma had sixfold higher odds of having the tumors themselves, as compared with controls, said the authors of a blinded cross-sectional study reported in the March issue of Gastroenterology.

The results reinforce the need for early screening of individuals whose siblings have advanced adenoma, said Dr. Siew Ng at the Chinese University of Hong Kong and her associates. The risk of advanced adenoma was even higher when affected probands were younger than average or had multiple adenomas, the researchers added.

Most studies that have purported to study the familial risk of adenoma actually studied the risk of adenoma in persons whose first-degree relatives have colorectal cancer, according to Dr. Ng and her associates. Their study included 200 asymptomatic (“exposed”) siblings of individuals with advanced adenomas diagnosed on colonoscopy, and 400 controls whose siblings had no colonoscopic evidence of neoplasia and who had no family history of colorectal cancer. The researchers defined advanced adenomas as those measuring at least 10 mm or that had high-grade dysplasia or villous or tubulovillous characteristics. “We focused on advanced lesions, as they have the greatest malignant potential, and removing these lesions can reduce colorectal cancer incidence and mortality,” they said (Gastroenterology. 2015 Nov 14. doi: 10.1053/j.gastro.2015.11.003).

Exposed siblings were consistently more likely to have adenomas themselves, compared with the control group, said the investigators. For example, the prevalence of any advanced adenoma was 11.5% among exposed siblings compared with only 2.5% among controls (matched odds ratio, 6.05; 95% confidence interval, 2.7-13.4; P less than .001). Similarly, the prevalence of adenomas measuring at least 10 mm was 10.5% among exposed individuals and 1.8% among controls (mOR, 8.6; 95% CI, 3.4-21.4; P less than .001). The prevalence of villous adenomas was 5.5% among exposed individuals and 1.3% among controls (mOR, 6.3; 95% CI, 2.0-19.5; P = .001) and the prevalence of all colorectal adenomas was 39% among exposed individuals and 19% among controls (mOR, 3.3; 95% CI, 2.2-5.0; P less than .001). Finally, two cases of colorectal cancer were detected among the exposed siblings, while no such cases were detected among the controls.

The exposed siblings and controls resembled each other in terms of aspirin use, smoking, body mass index, and metabolic diseases, the researchers said. However, the probands with adenoma were identified from a consecutive group of patients, while control siblings were enrolled through a screening program, they said. Therefore, the groups might have differed in terms of unmeasured environmental risk factors for cancer, such as physical activity and dietary habits. They also noted the difficulties in obtaining accurate family histories of colonic neoplasia, especially distinguishing adenoma from advanced adenoma. Finally, Hong Kong is ethnically homogeneous, and the data might not be generalizable to other populations, although Asia and Western countries do tend to have comparable rates of advanced adenoma in average-risk individuals and in families with histories of colorectal neoplasias.

The Research Grants Council of the Hong Kong Special Administrative Region funded the study. The investigators had no disclosures.

Source: American Gastroenterological Association


Current guidelines recommend early screening and shorter surveillance intervals in individuals with a first-degree relative (FDR) with colorectal cancer (CRC) (Gastroenterology. 2008;134:1570-95). Existing literature is limited by either lack of an appropriate comparison group or inability to assess adenoma risk in subjects who have an FDR with adenomas.

Dr. Harini S. Naidu

To date, this is the first prospective study to demonstrate increased prevalence of advanced adenomas in siblings of probands with advanced adenomas detected during colonoscopy. The authors should be congratulated on completing an organized, well-powered study using colonoscopy and histopathology and were careful to limit familial clustering by randomly selecting only one sibling from each family. Although this study has important findings, there are a few points worthy of consideration.

First, it would be helpful to understand whether the siblings shared both parents, one parent, or were adopted, as this would affect the genetic implications of the findings.

Second, the analysis did not stratify probands and siblings based on whether the colonoscopy included in the study was the first or second screening, or surveillance colonoscopy. The risk of advanced adenomas is expected to be different in someone with numerous normal colonoscopies, compared with someone undergoing their initial screening colonoscopy, and this point deserves clarification.

Dr. Audrey H. Calderwood

Third, it would be helpful to know how many siblings in each group were excluded because of previous adenomas, which would bias results toward the null. For example, exclusion of high-risk individuals with previous adenomas from the control group may make the prevalence of adenoma detection appear lower if only lower-risk individuals are included.

Lastly, this study was performed in a uniform Asian patient population, and may not be generalizable to other populations. Validation in a more ethnically heterogeneous setting is warranted. Overall, this is a solid, clinically relevant study that can help inform the impact of family history of advanced adenomas on CRC screening recommendations.

In addition, the study’s findings corroborate the American College of Gastroenterology’s recommendations for earlier CRC screening at shorter surveillance intervals in patients who have FDRs with advanced adenomas detected at age less than 60, or two FDRs diagnosed with advanced adenomas at any age (Am J Gastroenterol. 2009;104:739–50).

Dr. Harini S. Naidu and Dr. Audrey H. Calderwood are in the section of gastroenterology, Boston University. The authors have no conflicts of interest to declare.


Body

Current guidelines recommend early screening and shorter surveillance intervals in individuals with a first-degree relative (FDR) with colorectal cancer (CRC) (Gastroenterology. 2008;134:1570-950). Existing literature is limited by either lack of an appropriate comparison group or inability to assess adenoma risk in subjects who have an FDR with adenomas.

Dr. Harini S. Naidu

To date, this is the first prospective study to demonstrate increased prevalence of advanced adenomas in siblings of probands with advanced adenomas detected during colonoscopy. The authors should be congratulated on completing an organized, well-powered study using colonoscopy and histopathology and were careful to limit familial clustering by randomly selecting only one sibling from each family. Although this study has important findings, there are a few points worthy of consideration.

First, it would be helpful to understand whether the siblings shared both parents, one parent, or were adopted, as this would affect the genetic implications of the findings.

Second, the analysis did not stratify probands and siblings based on whether the colonoscopy included in the study was the first or second screening, or surveillance colonoscopy. The risk of advanced adenomas is expected to be different in someone with numerous normal colonoscopies, compared with someone undergoing their initial screening colonoscopy, and this point deserves clarification.

Dr. Audrey H. Calderwood

Third, it would be helpful to know how many siblings in each group were excluded due to previous adenomas, which would bias results toward the null. For example, excluding high-risk individuals with previous adenomas from the control group may make the prevalence of adenoma detection appear lower, since only lower-risk individuals would remain.

Lastly, this study was performed in an ethnically homogeneous Asian patient population and may not be generalizable to other populations. Validation in a more ethnically heterogeneous setting is warranted. Overall, this is a solid, clinically relevant study that can help inform the impact of a family history of advanced adenomas on CRC screening recommendations.

In addition, the study’s findings corroborate the American College of Gastroenterology’s recommendations for earlier CRC screening at shorter surveillance intervals in patients who have FDRs with advanced adenomas detected at age less than 60, or two FDRs diagnosed with advanced adenomas at any age (Am J Gastroenterol. 2009;104:739–50).

Dr. Harini S. Naidu and Dr. Audrey H. Calderwood are in the section of gastroenterology, Boston University. The authors have no conflicts of interest to declare.

Title
Study supports earlier CRC screening of first-degree relatives of CRC patients

Siblings of patients with advanced adenoma had sixfold higher odds of having such lesions themselves, compared with controls, said the authors of a blinded cross-sectional study reported in the March issue of Gastroenterology.

The results reinforce the need for early screening of individuals whose siblings have advanced adenoma, said Dr. Siew Ng of the Chinese University of Hong Kong and her associates. The risk of advanced adenoma was even higher when affected probands were younger than average or had multiple adenomas, the researchers added.

Most studies that have purported to study the familial risk of adenoma actually studied the risk of adenoma in persons whose first-degree relatives have colorectal cancer, according to Dr. Ng and her associates. Their study included 200 asymptomatic (“exposed”) siblings of individuals with advanced adenomas as diagnosed on colonoscopy, and 400 controls whose siblings had no family history of colorectal cancer or colonoscopic evidence of neoplasia. The researchers defined advanced adenomas as those measuring at least 10 mm or that had high-grade dysplasia or villous or tubulovillous characteristics. “We focused on advanced lesions, as they have the greatest malignant potential, and removing these lesions can reduce colorectal cancer incidence and mortality,” they said (Gastroenterology. 2015 Nov 14. doi: 10.1053/j.gastro.2015.11.003).

Exposed siblings were consistently more likely to have adenomas themselves, compared with the control group, said the investigators. For example, the prevalence of any advanced adenoma was 11.5% among exposed siblings compared with only 2.5% among controls (matched odds ratio, 6.05; 95% confidence interval, 2.7-13.4; P less than .001). Similarly, the prevalence of adenomas measuring at least 10 mm was 10.5% among exposed individuals and 1.8% among controls (mOR, 8.6; 95% CI, 3.4-21.4; P less than .001). The prevalence of villous adenomas was 5.5% among exposed individuals and 1.3% among controls (mOR, 6.3; 95% CI, 2.0-19.5; P = .001) and the prevalence of all colorectal adenomas was 39% among exposed individuals and 19% among controls (mOR, 3.3; 95% CI, 2.2-5.0; P less than .001). Finally, two cases of colorectal cancer were detected among the exposed siblings, while no such cases were detected among the controls.

The exposed siblings and controls resembled each other in terms of aspirin use, smoking, body mass index, and metabolic diseases, the researchers said. However, the probands with adenoma were identified from a consecutive group of patients, while control siblings were enrolled through a screening program, they said. Therefore, the groups might have differed in terms of unmeasured environmental risk factors for cancer, such as physical activity and dietary habits. They also noted the difficulties in obtaining accurate family histories of colonic neoplasia, especially distinguishing adenoma from advanced adenoma. Finally, Hong Kong is ethnically homogeneous, and the data might not be generalizable to other populations, although Asian and Western countries do tend to have comparable rates of advanced adenoma in average-risk individuals and in families with histories of colorectal neoplasia.

The Research Grants Council of the Hong Kong Special Administrative Region funded the study. The investigators had no disclosures.

Source: American Gastroenterological Association

Display Headline
Study backed familial component of advanced adenoma risk
Article Source

FROM GASTROENTEROLOGY

Vitals

Key clinical point: Siblings of patients with advanced adenoma were substantially more likely to also have advanced adenomas as compared with controls.

Major finding: Exposed siblings had sixfold higher odds of advanced adenomas, compared with controls (matched odds ratio, 6.05; 95% confidence interval, 2.7-13.4; P less than .001).

Data source: A cross-sectional study of 200 asymptomatic siblings of individuals with advanced adenomas and 400 controls whose siblings had no family history of colorectal cancer or colonoscopic evidence of neoplasia.

Disclosures: The Research Grants Council of the Hong Kong Special Administrative Region funded the study. The investigators had no disclosures.