Diabetes-related kidney failure down sharply in Native Americans

Kidney failure in Native Americans and Alaska Natives with diabetes has declined drastically over the last 20 years, according to new data released as part of this month’s Vital Signs report by the CDC.

“The 54 percent decline in kidney failure from diabetes followed implementation of public health and population approaches to diabetes as well as improvements in clinical care by the IHS [Indian Health Service],” said Mary L. Smith, principal deputy director of the Indian Health Service.

Native Americans have the highest prevalence of diabetes of any U.S. population and are about twice as likely as white Americans to develop the disease. Furthermore, diabetes is the cause of 69% of new kidney failure cases in this population (MMWR. 2017 Jan 10. doi: 10.15585/mmwr.mm6601e1).

Since 1996, however, diabetes-related kidney failure has dropped more among Native Americans than among any other racial or ethnic group in the country. The 54% figure represents a decrease among U.S. adults from 57.3 diabetes-related end-stage renal disease cases per 100,000 population in 1996 to 26.5 per 100,000 in 2013.
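
For readers checking the arithmetic, the 54% figure follows directly from the two rates above; the sketch below reproduces it (the rates are the CDC numbers quoted in this article, and the result rounds to the reported 54%):

```python
# Diabetes-related ESRD incidence among Native American adults, per 100,000
rate_1996 = 57.3
rate_2013 = 26.5

relative_decline = (rate_1996 - rate_2013) / rate_1996
print(f"{relative_decline:.1%}")  # -> 53.8%, reported as a 54% decline
```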

“This decline is especially remarkable given the well-documented health and socioeconomic disparities in the [Native American and Alaska Natives] population, including poverty, limited health care resources, and disproportionate burden of many health problems,” wrote the authors of the Vital Signs report.

According to the report, blood sugar control among Native American populations has improved by 10%; kidney testing in Native Americans with diabetes aged 65 years or older is 50% more frequent than among Medicare patients with diabetes of the same age; and the average blood pressure of Native Americans with both diabetes and hypertension was 133/76 mm Hg in 2015.

“We believe these strategies can be effective in any population,” Ms. Smith stated, a sentiment shared by Tom Frieden, MD, director of the CDC.

“Strong coordinated clinical care and education, community outreach and environmental changes can make a dramatic difference in reducing complications from diabetes for all Americans,” Dr. Frieden said in a statement.

Not only does diabetes persist as a significant burden on the U.S. health care system, but kidney failure in particular can be costly. Figures released by the CDC indicate that average medical costs associated with kidney failure in 2013 were as high as $82,000 per patient, with Medicare spending nearly $14 billion for kidney failure treatments in the same year.

“The findings in this report are consistent with other studies among [Native Americans and Alaska Natives] nationwide and among Pima Indians in the Southwest, which concluded that improvements in blood pressure, blood glucose, and the use of ACE inhibitors and [angiotensin II receptor blockers] played a significant role in the decline of [diabetes-related end-stage renal disease] in these populations,” the report concludes.

To ensure that kidney failure continues to decline in Native Americans, the U.S. government will continue funding diabetes screening and prevention efforts in these communities, assist community health care facilities in providing diabetes care, and establish a nationwide system for tracking chronic kidney disease. The CDC also advocates population approaches and coordinated care for treating diabetes, advising health care professionals to “integrate kidney disease prevention and education into routine diabetes care.”

“The Indian Health Service has made tremendous progress by applying population health and team-based approaches to diabetes and kidney care,” Dr. Frieden stated.


Osteopenia risk up in men with sarcopenia and COPD

Men with chronic obstructive pulmonary disease (COPD) who also have sarcopenia are at significantly higher risk of osteopenia and osteoporosis than are men with COPD who do not have sarcopenia, according to a new study published in Chest.

“Muscle depletion has been considered a risk factor for low [bone mineral density (BMD)] in the healthy general population [but] data on the association between sarcopenia and osteopenia/osteoporosis in COPD patients are lacking,” wrote the investigators of the study, coauthored by Moo Suk Park, MD, of Yonsei University in Seoul, South Korea (Chest. 2017 Jan. doi: 10.1016/j.chest.2016.12.006).

“Although previous studies showed that loss of fat-free mass (FFM) was related to BMD loss in COPD patients, it is difficult to know the genuine relationship between skeletal muscle mass and BMD because whole body FFM contains a large proportion of water-retaining organs and nonmuscle soft tissue,” the authors continued.

The investigators examined data from the Korean National Health and Nutritional Examination Survey (KNHANES), identifying men at least 20 years of age with COPD who underwent both pulmonary function testing and dual-energy x-ray absorptiometry (DXA) during 2008-2011. A total of 864 men were deemed eligible for inclusion and were scored for sarcopenia and osteopenia/osteoporosis; the former was assessed via the appendicular skeletal muscle index (ASMI), the latter via T-score.

“Sarcopenia and presarcopenia were defined according to the presence of ASMI values that were less than two standard deviations (SDs) and between 2SDs and 1SD, respectively, below the mean value of a young male reference group aged 20-39 years,” according to the investigators. “Osteoporosis, osteopenia, and normal BMD were identified according to the lowest T-score of the three measured locations and were defined according to the World Health Organization criteria.”
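
The diagnosis is thus a simple cutoff rule on ASMI relative to the young reference group, as the sketch below illustrates; the reference mean and SD here are hypothetical placeholders, since the study’s actual KNHANES reference values are not given in the article:

```python
# Hypothetical reference values for the young male group (aged 20-39 years);
# the study's actual reference mean/SD are not reported in this article.
REF_MEAN = 7.7  # ASMI, kg/m^2 (placeholder)
REF_SD = 0.8    # (placeholder)

def classify(asmi: float) -> str:
    """Classify sarcopenia status by SDs below the reference mean."""
    if asmi < REF_MEAN - 2 * REF_SD:
        return "sarcopenia"      # more than 2 SDs below the mean
    if asmi < REF_MEAN - 1 * REF_SD:
        return "presarcopenia"   # between 1 and 2 SDs below the mean
    return "normal"

print(classify(5.9))  # -> sarcopenia
print(classify(6.5))  # -> presarcopenia
```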

Results showed that sarcopenia in men with COPD was associated with significantly higher odds of low BMD (odds ratio, 2.31; 95% CI, 1.53-3.46; P less than .001). Presarcopenia was more common among men with low BMD (157/445, 35.3%) than among men with normal BMD (51/332, 15.4%; P less than .001); likewise, sarcopenia was present in 8.3% (37/445) of men with low BMD versus only 1.2% (4/332) of men with normal BMD (P = .017). The correlation of 0.408 between ASMI and T-score (P less than .001) indicated a significant association between appendicular skeletal muscle mass and BMD.

“This study affirms the systemic nature of COPD, as it is not merely a disease that manifests as breathlessness and other respiratory complaints, but affects many aspects of a patient’s functionality and overall health,” explained Eric J. Gartman, MD, of Brown University, Providence, Rhode Island. “In clinical practice, this study reminds us that we need to consider these other issues in a COPD patient’s care, since the outcomes from these problems (e.g. hip fractures) can be devastating.”

Echoing those thoughts in a separate interview, Vera De Palo, MD, of Signature Healthcare in Brockton, Mass., explained that this study will help health care providers “deepen our understanding of these associations and contributing factors, [and] it may lead to targeted interventions that help to slow the sarcopenia that contributes to the dysfunction and fragility in our patients.”*

A critical limitation of this study, however, is the sample population, according to Dr. Gartman. “It is solely made up of Korean men, thus somewhat limiting the generalizability to a larger population [and] especially to women, given that there are several other considerations surrounding effects on BMD.”

No funding sources were disclosed. The authors reported no conflicts of interest.

*This article was updated on 1/20/17 at 1:30 p.m. It misstated the affiliation for Vera De Palo, MD, FCCP.

Vitals

Key clinical point: Men with chronic obstructive pulmonary disease who also have sarcopenia are at higher risk for osteopenia and osteoporosis than are those without sarcopenia.

Major finding: Sarcopenia in men with COPD carried significantly higher odds of low bone mineral density (OR, 2.31; 95% CI, 1.53-3.46; P less than .001).

Data source: Retrospective cross-sectional study of data on 777 men with COPD during 2008-2011.

Disclosures: No funding sources were disclosed. The authors reported no conflicts of interest.

Depressive symptoms plague ‘significant number’ of active airline pilots

The number of active airline pilots managing depressive symptoms might be severely underreported because of pilots’ fear of facing workplace stigma, a cross-sectional study of pilots from across the globe suggests.

“This study fills an important gap of knowledge by providing a current glimpse of mental health among commercial airline pilots, which to date had not been available,” wrote the authors, who added that such data are even more important in the wake of the March 2015 Germanwings flight 9525 crash.

Alexander C. Wu, MPH, and his coinvestigators conducted a descriptive, cross-sectional study, distributing an anonymous online survey through airline unions, companies, and airports. The survey, administered between April and December 2015, was received by 3,485 airline pilots; 1,866 completed half the survey and 1,837 completed it in full. Symptoms of depression were evaluated with the Patient Health Questionnaire (PHQ-9) depression module, supplemented by questions taken from the National Health and Nutrition Examination Survey – created by the Centers for Disease Control and Prevention’s National Center for Health Statistics – and the standardized Job Content Questionnaire (JCQ) (Environ Health. 2016 Dec 15. doi: 10.1186/s12940-016-0200-6).

The median age of those who responded was 42 years for females and 50 for males, and the average career length was 16 years across both genders, reported Mr. Wu, a doctoral candidate at the Harvard School of Public Health in Boston, and his coinvestigators. Nearly half of the respondents (45.5%) were from the United States, which was one of more than 50 countries represented, including Canada, Australia, Spain, the United Kingdom, Germany, the United Arab Emirates, Hong Kong, and Thailand.

More than 60% of the respondents had either a 4-year college/university degree or graduate education, and 80% had flown at least one “major trip” in the 30 days prior to completing the survey. Most respondents did not smoke, were married, and were white. Depression was defined as a PHQ-9 score of at least 10, a threshold met by 233 (12.6%) of the 1,848 pilots who completed those survey items and by 193 (13.5%) of the 1,430 pilots who reported flying in the 7 days prior to completing the survey. Furthermore, 75 (4.1%) of the 1,829 who answered the relevant question reported having suicidal thoughts at some point in the prior 2 weeks.
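
The PHQ-9 threshold is a simple sum-and-cutoff rule: nine items, each scored 0 (not at all) to 3 (nearly every day), with a total of 10 or more taken as depression. A minimal sketch, with invented item responses for illustration:

```python
def meets_depression_threshold(item_scores: list[int], cutoff: int = 10) -> bool:
    """PHQ-9: nine items scored 0-3; the study used a total of >= 10."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores) >= cutoff

responses = [2, 1, 2, 1, 0, 1, 2, 1, 0]  # invented example; total = 10
print(meets_depression_threshold(responses))  # -> True
```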

“We found a significant trend in proportions of depression at higher levels of use of sleep-aid medication (P less than .001) and among those experiencing sexual harassment (P = .001) or verbal harassment (P less than .001),” the investigators noted. In addition, 75 of the pilots “reported having thoughts of being better off dead or self-harm within the past 2 weeks,” the investigators wrote. “To our knowledge, this is the most current measure of the prevalence of suicidal thoughts among airline pilots.”

When asked about the study, Mark S. Gold, MD, said in an interview that workplace stigma and fear of criticism from colleagues may be the main factors preventing pilots from openly discussing these issues among themselves. “Being a pilot, like a physician, is a drug-free occupation, [but] substance use disorders and depression are so commonly found together that the question is often ‘chicken or egg,’ ” said Dr. Gold, adjunct professor of psychiatry at Washington University in St. Louis, and former Donald R. Dizney Eminent Scholar and chairman of the psychiatry department at the University of Florida, Gainesville. “Add shame, guilt, and denial, and suicide ideation, [and] attempts and completions become all the more common.”

The solution, Dr. Gold said, is early detection, along with “multidisciplinary evaluation and diagnosis, prompt treatment and long-term follow-up by a physician’s health program or [an employee assistance program that] is associated with sustained remissions.” Mr. Wu and his coinvestigators said they would neither rate nor recommend a specific treatment. “However, [Internet-based cognitive-behavioral therapy] is one example of a possible intervention found in the literature,” they wrote.

The investigators cited several limitations. They conceded, for example, that there is “potential underestimation of frequencies of adverse mental health outcomes due to less participation among participants with more severe depression compared to those with less severe or without depression,” and that those who completed the survey were, on average, older and had flown more recently than noncompleters. The age discrepancy might have skewed the results.

Harvard T.H. Chan School of Public Health funded the study. Mr. Wu and his coinvestigators reported no relevant financial disclosures.

Vitals

Key clinical point: A significant portion of airline pilots suffer from depressive symptoms but keep quiet about them to avoid workplace stigma.

Major finding: Among surveyed airline pilots, 12.6% met the threshold for depression, and 4.1% reported suicidal thoughts within the prior 2 weeks.

Data source: Descriptive cross-sectional study of 3,485 surveyed pilots from April through December 2015.

Disclosures: Harvard T.H. Chan School of Public Health funded the study. Mr. Wu and his coinvestigators reported no relevant financial disclosures.

Vedolizumab effective at treating UC in wide range of patients

Clinicians treating patients for ulcerative colitis (UC) should consider vedolizumab: the drug has been found to be both safe and highly effective in patients who have never received a tumor necrosis factor (TNF) antagonist as well as in those for whom TNF antagonist treatment failed, according to a study published in the February issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2016.08.044).

“Approximately 50% of patients with UC do not respond to induction therapy with TNF antagonists or lose response over time such that after 1 year of treatment, clinical remission is observed in only 17%-34% of patients,” explained the authors of the report, led by Brian G. Feagan, MD, of the University of Western Ontario in London. “Furthermore, the risk of serious infection (with immunosuppressants in general, and TNF antagonists specifically) is an important concern [so] alternative approaches to treatment are needed.”

For this study, Dr. Feagan and his colleagues turned to GEMINI 1, a multicenter, phase III, randomized, placebo-controlled trial that evaluated vedolizumab in patients with moderately to severely active UC. The trial randomized 374 subjects 3:2 to intravenous vedolizumab or placebo. Because this number was deemed too low, a further 521 patients were enrolled and received open-label vedolizumab. The randomized group was designated Cohort 1 and the open-label group Cohort 2.

“Eligible patients had UC for [at least] 6 months before enrollment, MCS [Mayo Clinic scores for disease activity] from 6 to 12, and endoscopic subscores of [at least] 2 within 7 days before the first dose of study drug, and evidence of disease extending [at least] 15 cm proximal to the rectum,” the authors explained.

Vedolizumab was administered at baseline, with follow-up evaluations at 2, 4, and 6 weeks. Subjects who experienced a clinical response – defined as an MCS reduction of at least 3 points and at least 30%, along with either a reduction of at least 1 point in the rectal bleeding subscore or an absolute rectal bleeding subscore of 0 or 1 – were re-randomized into cohorts that received the drug every 4 weeks or every 8 weeks, for a period of up to 46 weeks. The total length of the study was, therefore, 52 weeks; for patients who were re-randomized, follow-up evaluations took place every 4 weeks.
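
The response criterion is thus a composite: an MCS condition and a rectal bleeding condition must both hold. A minimal sketch of that logic (function and variable names are illustrative, not taken from the study):

```python
def clinical_response(baseline_mcs: int, week6_mcs: int,
                      baseline_bleed: int, week6_bleed: int) -> bool:
    """MCS must drop by >= 3 points AND >= 30% from baseline; the rectal
    bleeding subscore must drop by >= 1 point OR end at 0 or 1."""
    mcs_drop = baseline_mcs - week6_mcs
    mcs_ok = mcs_drop >= 3 and mcs_drop >= 0.3 * baseline_mcs
    bleed_ok = (baseline_bleed - week6_bleed >= 1) or week6_bleed <= 1
    return mcs_ok and bleed_ok

print(clinical_response(9, 5, 2, 1))  # MCS down 4 points (44%), bleeding 2 -> 1: True
```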

A total of 464 patients who were enrolled and completed the study were naive to TNF antagonists, while 367 had previously been treated with TNF antagonists unsuccessfully. At 6-week follow-up, 53.1% of naive subjects receiving vedolizumab had achieved clinical response, versus 26.3% of naive subjects on placebo (absolute difference, 26.4%; 95% confidence interval, 12.4-40.4). Similarly, those with previous TNF antagonist exposure who were given vedolizumab had a 39.0% clinical response rate, versus 20.6% of those on placebo (AD, 18.1%; 95% CI, 2.8-33.5).

At week 52, naive subjects on vedolizumab continued to have far higher rates of clinical response than did those on placebo, with 46.9% and 19.0%, respectively (AD, 28.0%; 95% CI, 14.9-41.1). For those with previous TNF antagonist exposure, the disparity between vedolizumab and placebo was similarly profound: 36.1% versus 5.3%, respectively (AD, 29.5%; 95% CI, 12.8-46.1).

Adverse event rates between naive and previously exposed patients were not significantly different, according to the findings. In naive patients, 74% of those on vedolizumab experienced an adverse event, and 9% experienced a serious adverse event. For those on placebo, those rates were 75% and 16%, respectively. For patients who had previously been on a TNF antagonist, subjects on vedolizumab had an 88% rate of adverse events and a 17% rate of serious adverse events, compared with 84% and 11%, respectively, for those on placebo.

“It is notable that, in maintenance, the absolute remission rates were substantially lower in the TNF failure population for both vedolizumab-treated and placebo-treated patients,” the investigators noted, positing that “The relatively low placebo response rate in the TNF-failure group could be attributed to the presence of a greater proportion of patients with more refractory disease and poor prognostic factors, such as pancolitis and long disease duration.”

The study was funded by Millennium Pharmaceuticals. Dr. Feagan disclosed serving as a consultant and receiving financial support for research from Millennium and other companies. No other coauthors reported relevant financial disclosures.
 

Vitals

Key clinical point: Vedolizumab is a highly effective treatment option both for UC patients in whom TNF antagonists have failed and for those who have never received TNF antagonists.

Major finding: Response to vedolizumab in patients new to TNF antagonists was 53.1%, versus 26.3% in the placebo cohort; patients who failed TNF antagonist treatment previously had a 39.0% response rate to vedolizumab, versus 20.6% on placebo.

Data source: Post-hoc cohort analysis of 831 UC patients from the GEMINI 1 study population.

Disclosures: Funding provided by Millennium Pharmaceuticals. Dr. Feagan disclosed potential conflicts of interest.

Endoscopy during pregnancy increases risk of preterm, SGA birth

Women who undergo endoscopy during pregnancy have an increased chance that their baby will be born preterm or small for gestational age (SGA), according to research published in the February issue of Gastroenterology (doi: 10.1053/j.gastro.2016.10.016).

“Research in pregnancy outcome in women undergoing endoscopy during pregnancy is scarce,” wrote the authors, led by Jonas F. Ludvigsson, MD, of the Karolinska Institutet in Stockholm. They added that only nine studies offer original data, on a total of 379 pregnant women undergoing endoscopy: two of these studies examined pregnancy outcome in upper endoscopy (n = 143), two examined pregnancy outcome in sigmoidoscopy or colonoscopy (n = 116), and four examined pregnancy outcome in endoscopic retrograde cholangiopancreatography (n = 120).

Additionally, the authors noted that, to their knowledge, no studies offer data on the relative risk of endoscopy during pregnancy, and none followed up subjects after birth. Of the few studies that do exist, a handful conclude that endoscopy during pregnancy is safe, but they omit stillbirths and neonatal deaths that did not occur immediately after endoscopy, which could bias their conclusions.

To address the lack of reliable research on the effect of endoscopy on pregnancy, Dr. Ludvigsson and his coinvestigators launched a nationwide study of pregnancies in Sweden that occurred between 1992 and 2011, all of which were registered in the Swedish Medical Birth Registry and the Swedish Patient Registry. The databases revealed 2,025 upper endoscopies, 1,109 lower endoscopies, and 58 endoscopic retrograde cholangiopancreatographies, for a total of 3,052 pregnancies exposed to endoscopy over that time period.

The primary endpoints of the study were the frequencies of preterm birth and stillbirth in this population. To measure them, the investigators calculated adjusted relative risks (ARRs) via Poisson regression, using data on 1,589,173 pregnancies that were not exposed to endoscopy as the reference.
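
Relative risks for a binary outcome are commonly estimated this way, with a Poisson model and robust standard errors (“modified Poisson” regression). The sketch below shows the general approach in statsmodels on invented toy data – not the Swedish registry data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
exposed = rng.integers(0, 2, n)                 # toy exposure: endoscopy yes/no
p_preterm = np.where(exposed == 1, 0.06, 0.04)  # toy data with a true RR of 1.5
preterm = rng.binomial(1, p_preterm)

X = sm.add_constant(exposed)
# Poisson family plus robust errors yields relative risks for a binary outcome
fit = sm.GLM(preterm, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params[1]))                    # estimated RR, close to 1.5
```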

“Stillbirth is recorded from 22 completed gestational weeks since mid-2008, and before that from gestational week 28. Gestational age was determined using ultrasound, and when ultrasound data were missing, we used the first day of the last menstrual period for pregnancy start,” the authors wrote.

The results showed that mothers who underwent any kind of endoscopy during pregnancy were more likely to experience a preterm birth or give birth to a baby who was SGA, with ARRs of 1.54 (95% confidence interval, 1.36-1.75) and 1.30 (95% CI, 1.07-1.57), respectively. However, the increased risk of other adverse outcomes, such as stillbirth or congenital malformation, was not statistically significant; both confidence intervals crossed 1.0 (stillbirth ARR, 1.45; 95% CI, 0.87-2.40; congenital malformation ARR, 1.00; 95% CI, 0.83-1.20).

Women who were exposed to endoscopy during pregnancy were more likely to have a preterm birth, compared with women who had endoscopy 1 year before or after pregnancy, but were at no higher risk of SGA birth, stillbirth, or congenital malformations. Additionally, when multiple pregnancies carried by the same mother were compared, endoscopy during only one of the pregnancies showed no correlation with gestational age or birth weight.

“Earlier recommendations suggest that endoscopy should only be performed during pregnancy if there are strong indications, and if so, not during the second trimester, [but] our study shows that endoscopy is unlikely to have a more than marginal influence on pregnancy outcome independently of trimester,” the authors concluded. “Neither does it seem that sigmoidoscopy is preferable to a full colonoscopy in the pregnant woman.”

Regarding the latter conclusion, the authors clarified that “it is possible that in women with particularly severe gastrointestinal disease where endoscopy is inevitable, the physician will prefer a sigmoidoscopy rather than a full colonoscopy, and under such circumstances the sigmoidoscopy will signal a more severe disease.”

The investigators also noted that their study had several limitations, including not knowing the length of time each endoscopy took, the sedatives and bowel preparations that were used, the patient’s position during the procedure, and the indication that prompted the endoscopy in the first place.

The study was funded by grants from the Swedish Society of Medicine, the Stockholm County Council, and the Swedish Research Council. Dr. Ludvigsson and his coauthors did not report any relevant financial disclosures.

Key clinical point: Endoscopy during pregnancy is associated with a small but statistically significant increase in the risk of preterm birth and of delivering an infant who is small for gestational age.

Major finding: The adjusted relative risk of preterm birth was 1.54 (95% CI, 1.36-1.75) and was 1.30 (95% CI, 1.07-1.57) for SGA.

Data source: A population-based cohort study of 3,052 pregnancies in Sweden exposed to endoscopy from 1992 through 2011.

Disclosures: The study was funded by the Swedish Society of Medicine, the Stockholm County Council, and the Swedish Research Council. The authors did not report any relevant financial disclosures.

Propofol safety similar to that of traditional sedatives used in endoscopy

Propofol sedation not worth the cost
Article Type
Changed
Sat, 12/08/2018 - 03:13

For doctors performing gastrointestinal endoscopic procedures, use of propofol as a sedative instead of traditionally used agents carries about the same risk of causing cardiopulmonary adverse events, according to a study published in the February issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2016.07.013).

“Because of its popularity, propofol is being used for both simple endoscopic procedures such as esophagogastroduodenoscopy and colonoscopy, and advanced endoscopic procedures [but] despite the widespread use of propofol, significant concerns remain regarding its safety profile,” according to the authors of the study, led by Vaibhav Wadhwa, MD, of Fairview Hospital in Cleveland.
The use of propofol as a sedative in gastrointestinal endoscopic procedures has increased in recent years, but the growing number of advanced, and therefore longer and more complicated, procedures has raised questions about the safety of prolonged sedation. Before propofol became prevalent, the traditional sedative was a benzodiazepine combined with an opioid. While still used today, that combination has seen a dramatic decline in usage because of its longer recovery time and lower rates of satisfaction among both patients and doctors, according to the authors. Combinations including midazolam, meperidine, pethidine, remifentanil, and fentanyl have also been used.

To compare the safety of propofol and a more traditional sedative combination, Dr. Wadhwa and his coauthors conducted a meta-analysis of published studies in the Medline (Ovid), EMBASE, and the Cochrane controlled trials registry databases. All searches covered research published through September 2014, with the Medline search starting in 1960 and the EMBASE and Cochrane searches starting in 1980, yielding a total of 2,117 candidate studies.

Of those, 1,568 remained after duplicates were removed, and 136 survived screening after records deemed irrelevant or otherwise unsuitable were set aside. From those 136, 83 were excluded for various reasons – because they featured ineligible populations, or were retrospective studies, single-arm studies, or conference abstracts – leaving 53 full-text articles to be evaluated for inclusion. Of those, 27 were deemed eligible and were ultimately included.

“The primary outcomes measured were cardiopulmonary complications such as hypoxia, if oxygen saturation decreased to less than 90%; hypotension, if systolic blood pressure decreased to less than 90 mm Hg; arrhythmias, including bradycardia, supraventricular and ventricular arrhythmias, and ectopy,” Dr. Wadhwa and his coauthors wrote. “A subgroup analysis also was performed to assess studies in which sedation was directed by gastroenterologists and was compared with nongastroenterologists.” Apnea was not measured because of the lack of studies that assessed it qualitatively.

Pooled odds ratios were used to measure and compare results. The 27 included studies featured data on a total of 2,518 patients. Traditional sedatives were used on 1,194 of these subjects, while the remaining 1,324 received propofol. Regarding hypoxia, 26 of the 27 studies addressed this, of which 13 concluded that propofol was safer and 9 found that traditional sedatives were safer, with a pooled OR for propofol of 0.82 (95% confidence interval [CI], 0.63-1.07).

Twenty-five studies examined hypotension, of which 9 favored propofol and 10 favored traditional sedatives, for an OR of 0.92 (95% CI, 0.64-1.32). Of the 20 studies that included arrhythmia, 8 favored propofol and 7 favored traditional sedatives, for an OR of 1.07 (95% CI, 0.68-1.68).
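
As background on what a "pooled odds ratio" means here, the following sketch shows one standard way to pool study-level odds ratios, the fixed-effect inverse-variance method, applied to made-up 2x2 counts. Both the counts and the choice of pooling method are illustrative assumptions; the paper's own analysis may have used a different estimator.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling of odds
# ratios. The 2x2 counts below are hypothetical, not from the 27 studies.
import numpy as np

# Each row: (events_propofol, n_propofol, events_traditional, n_traditional)
studies = np.array([
    (8, 100, 12, 100),
    (5,  80,  6,  75),
    (10, 120, 9, 110),
], dtype=float)

a = studies[:, 0]                    # propofol: events
b = studies[:, 1] - a                # propofol: non-events
c = studies[:, 2]                    # traditional: events
d = studies[:, 3] - c                # traditional: non-events

log_or = np.log((a * d) / (b * c))   # per-study log odds ratio
var = 1 / a + 1 / b + 1 / c + 1 / d  # Woolf variance of each log OR
weights = 1 / var                    # inverse-variance weights

pooled = np.sum(weights * log_or) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI, {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```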

“Our results showed that propofol sedation for gastrointestinal endoscopic procedures, whether simple or advanced, did not increase the cardiopulmonary adverse event rate when compared with traditional sedative agents,” the authors concluded.

In terms of the risk of developing any of the aforementioned complications, of the 20 relevant studies, 9 found propofol to be safer versus 6 that found traditional sedatives to be the better option, yielding an overall OR of 0.77 (95% CI, 0.56-1.07) for propofol. For the subanalysis regarding which type of clinician administered each sedative, 25 studies contained relevant data, of which 9 studies reported gastroenterologists administering sedatives, 5 studies reported endoscopy nurses administering sedatives under the supervision of the gastroenterologist, and 11 studies reported either an anesthesiologist, intensive care unit physician, or critical care physician administering sedatives.

“Gastroenterologist-directed sedation with propofol was noninferior to nongastroenterologist sedation,” Dr. Wadhwa and his coinvestigators wrote. “The risk of complications was similar to [that of traditional sedatives] both during simple and advanced endoscopic procedures.”

While the authors point to the sheer size of the pooled study population as a major strength of these results, they also note an inherent limitation: the analysis was conducted at the study level rather than the individual-patient level. Furthermore, study-to-study variation in how propofol was administered may have introduced heterogeneity into the findings of the meta-analysis. A large clinical trial would be the next logical step to confirm what this analysis has found.

“Because it may not be feasible to perform such a study, this meta-analysis should provide a rough idea of the possible associations,” the authors wrote. “However, the difference in complications between propofol and other agents might not be clinically relevant owing to the lack of any serious complications such as intubations or deaths in the studies used in this meta-analysis.”

No funding source was reported for this study. Dr. Wadhwa and his coauthors reported no relevant financial disclosures.


The use of propofol-mediated sedation and, in particular, anesthetist-directed sedation has become a hot-button item in the landscape of gastrointestinal endoscopy by virtue of its overall cost. Some experts place the cost of this at over $1.1 billion annually. Recent studies stemming from a large administrative database question the safety of propofol-mediated sedation when compared to the standard combination of a benzodiazepine and opioid. Still other studies have found that anesthesiologist-directed sedation did not improve the rate of polyp detection or polypectomy. Given these findings, our research group decided to embark upon a meta-analysis to further study the safety profile of propofol when compared to the combination of a benzodiazepine and opioid. We found that when compared to the traditional sedation agents, the pooled odds ratio of propofol-mediated sedation was not associated with a safety benefit in terms of the development of hypoxia or hypotension. We also found that the safety profile of propofol-mediated sedation was equivalent whether it was administered by a gastroenterologist or nongastroenterologist.

Does this answer the question? I think it is safe to say that for healthy patients undergoing elective upper endoscopy and colonoscopy there is no safety benefit of propofol-mediated sedation compared with traditional agents. Our data also suggest that, with appropriate patient selection and training, endoscopist-directed propofol sedation is a viable alternative to traditional sedation with a combination of a benzodiazepine and opioid. The benefit of the agent may be its pharmacodynamics, which allow for rapid targeting of the appropriate level of sedation and enhanced recovery, which lead to both augmented throughput and patient satisfaction. This has been well studied for endoscopist-directed propofol sedation when compared to traditional sedation regimens and may be true for anesthesiologist-directed sedation, although I know of no comparative data. Propofol sedation is, however, a much more expensive alternative for healthy patients undergoing elective ambulatory endoscopy.

John Vargo, MD, MPH, is the department chair of gastroenterology and hepatology at Cleveland Clinic as well as vice chairman of Cleveland Clinic’s Digestive Disease Institute.

Key clinical point: Propofol for sedation in gastrointestinal endoscopic procedures carries a level of risk for cardiopulmonary adverse events similar to that of more traditional sedatives.

Major finding: The pooled odds ratio for propofol was 0.82 for hypoxia (95% CI, 0.63-1.07), 0.92 for hypotension (95% CI, 0.64-1.32), and 0.86 for complications during advanced endoscopic procedures (95% CI, 0.56-1.34); none of these differences reached statistical significance.

Data source: Retrospective meta-analysis of 27 studies involving 2,518 patients from 1966 through 2014.

Disclosures: The authors reported no relevant financial disclosures.

VIDEO: Protein-rich diet can help manage type 2 diabetes, NAFLD

Study’s methodology raises questions
Article Type
Changed
Tue, 05/03/2022 - 15:31

 

Diets rich in either animal or plant protein can reduce not only liver fat but also insulin resistance and markers of hepatic necroinflammation in patients with type 2 diabetes, according to a study published in the February issue of Gastroenterology (doi: 10.1053/j.gastro.2016.10.007).

“High-protein diets have shown variable and sometimes even favorable effects on glucose metabolism and insulin sensitivity in people with type 2 diabetes and it is unclear which metabolic pathways are involved,” wrote the authors of the study, led by Mariya Markova, MD, of the German Institute of Human Nutrition Potsdam-Rehbrücke in Nuthetal, Germany.


Obesity and insulin resistance have long been linked to liver fat, and excessive liver fat defines nonalcoholic fatty liver disease (NAFLD), which carries a significant risk of progressing to nonalcoholic steatohepatitis (NASH). Compounding this issue, at least in the United States, are widespread dietary and nutritional habits that promote consumption of animal protein, carbohydrates, and saturated fats. This “hypercaloric Western style diet,” as the authors call it, exacerbates the accumulation of fat deposits in the liver and complicates the health of patients across the country, regardless of weight.

“Remarkably, diets restricted in methionine were shown to prevent the development of insulin resistance and of the metabolic syndrome in animal models [so] the type of protein may elicit different metabolic responses depending on the amino acid composition,” Dr. Markova and her coinvestigators noted. “It is therefore hypothesized that high-plant-protein diets exert favorable effects on hepatic fat content and metabolic responses as compared to high intake of animal protein rich in BCAA [branched-chain amino acids] and methionine,” both of which can be found in suitably low levels via plant protein.

Dr. Markova and her team devised a prospective, randomized, open-label clinical trial involving 44 patients with type 2 diabetes and NAFLD, all of whom were recruited at the department of clinical nutrition of the German Institute of Human Nutrition Potsdam-Rehbrücke between June 2013 and March 2015. Subjects were randomized into one of two cohorts, each of which were assigned a diet rich in either animal protein (AP) or plant protein (PP) for a period of 6 weeks. Median body mass index in the AP cohort was 31.0 ± 0.8, and was 29.4 ± 1.0 in the PP cohort.

The AP cohort diet consisted mainly of meat and dairy products, while legumes constituted the bulk of the PP cohort diet. Both diets were isocaloric and had the same macronutrient makeup: 30% protein, 40% carbohydrates, and 30% fat. Seven subjects dropped out prior to completion of the study; of the 37 that remained all the way through – 19 in the AP cohort, 18 in the PP cohort – the age range was 49-78 years. Subjects maintained the same physical exercise regimens throughout the study that they had beforehand, and were asked not to alter them. Hemoglobin A1c levels ranged from 5.8% to 8.8% at baseline, and evaluations were carried out at fasting levels for each subject.
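
To make the macronutrient split concrete, the short sketch below converts those percentages into daily gram targets for an assumed 2,000-kcal intake; the calorie figure is a hypothetical round number chosen for illustration, not a target prescribed in the trial.

```python
# Illustrative arithmetic only: convert the study's macronutrient shares
# (30% protein / 40% carbohydrate / 30% fat) into daily gram targets for
# an assumed 2,000-kcal intake. The calorie level is hypothetical.
KCAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}
SHARES = {"protein": 0.30, "carbohydrate": 0.40, "fat": 0.30}

def gram_targets(total_kcal: float) -> dict[str, float]:
    """Grams of each macronutrient supplying its share of total_kcal."""
    return {m: round(total_kcal * share / KCAL_PER_GRAM[m], 1)
            for m, share in SHARES.items()}

print(gram_targets(2000))
# {'protein': 150.0, 'carbohydrate': 200.0, 'fat': 66.7}
```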

Patients in both cohorts saw significant decreases in intrahepatic fat content by the end of the trial period: those in the AP cohort by 48.0% (P = .0002) and those in the PP cohort by 35.7% (P = .001). Perhaps most importantly, the reductions in both cohorts were not correlated with body weight. In addition, levels of fibroblast growth factor 21 (FGF21), which has been shown to be a predictive marker of NAFLD, decreased by nearly 50% in both the AP and PP cohorts (P less than .0002 for both).

“Despite the elevated intake and postprandial uptake of methionine and BCAA in the AP group, there was no indication of negative effects of these components,” the authors stated in the study. “The origin of protein – animal or plant – did not play a major role. Both high-protein diets unexpectedly induced strong reductions of FGF21, which was associated with metabolic improvements and the decrease of IHL [intrahepatic lipids].”

Despite these findings, however, the 6-week time span used here is not sufficient to determine just how viable this diet may be in the long term, according to the authors. Further studies will be needed, and will need to take place over longer periods of time, to “show the durability of the responses and eventual adverse effects of the diets.” Furthermore, different age groups must be examined to find out if the benefits observed by Dr. Markova and her coinvestigators were somehow related to the age of these subjects.

The study was funded by grants from the German Federal Ministry of Food and Agriculture and the German Center for Diabetes Research. Dr. Markova and her coauthors did not report any financial disclosures.


Human studies to assess the effects of isocaloric macronutrient substitution are fraught with difficulty. If one macronutrient is increased, what happens to the others? If you observe an effect, is it the phenomenon you were seeking due to the macronutrient you altered, or an epiphenomenon due to changes in the others?

Markova et al. attempted to study a 6-week “isocaloric” increase of animal vs. plant protein (from 17% to 30% of calories as protein). However, a decrease of percent fat from 41% to 30%, and a reduction in carbohydrate from 42% to 40% occurred commensurately. This brings up three concerns. First, despite the diet’s being “isocaloric,” weight and body mass index decreased by 2 kg and 0.8 kg/m2, respectively. Reductions in intrahepatic, visceral, and subcutaneous fat, and an increase in lean body mass were noted. So was the diet isocaloric? Protein reduces plasma ghrelin levels and is more satiating. Furthermore, metabolism of protein to ATP is inefficient compared to that of carbohydrate or fat. The authors say only that calories were “unrestricted.” These issues do not engender “isocaloric” confidence.

Second, animal protein (high branched-chain amino acid and methionine) consists of meat and dairy, but their fatty acid compositions are quite different. Dairy has odd-chain fatty acids, which are protective against type 2 diabetes, while meat has even-chain fatty acids, which may be more predisposing to disease. Did the change in fatty acids play a role, rather than the change in amino acids?

Lastly, the type of carbohydrate was not controlled for. Fructose is significantly more lipogenic than glucose. Yet they were lumped together as “carbohydrate,” and were uncontrolled. So what macronutrient really caused the reduction in liver fat? These methodological issues detract from the authors’ message, and this study must be considered preliminary.

Robert H. Lustig, MD, MSL, is in the division of pediatric endocrinology, UCSF Benioff Children’s Hospital, San Francisco; member, UCSF Institute for Health Policy Studies. Dr. Lustig declared no conflicts of interest.

Publications
Topics
Sections
Body

Human studies to assess the effects of isocaloric macronutrient substitution are fraught with difficulty. If one macronutrient is increased, what happens to the others? If you observe an effect, is it the phenomenon you were seeking due to the macronutrient you altered, or an epiphenomenon due to changes in the others?

Markova et al. attempted to study a 6-week “isocaloric” increase of animal vs. plant protein (from 17% to 30% of calories as protein). However, a decrease of percent fat from 41% to 30%, and a reduction in carbohydrate from 42% to 40% occurred commensurately. This brings up three concerns. First, despite the diet’s being “isocaloric,” weight and body mass index decreased by 2 kg and 0.8 kg/m2, respectively. Reductions in intrahepatic, visceral, and subcutaneous fat, and an increase in lean body mass were noted. So was the diet isocaloric? Protein reduces plasma ghrelin levels and is more satiating. Furthermore, metabolism of protein to ATP is inefficient compared to that of carbohydrate or fat. The authors say only that calories were “unrestricted.” These issues do not engender “isocaloric” confidence.

Dr. Robert H. Lustig
Dr. Robert H. Lustig
Second, animal protein (high branched-chain amino acid and methionine) consists of meat and dairy, but their fatty acid compositions are quite different. Dairy has odd-chain fatty acids, which are protective against type 2 diabetes, while meat has even-chain fatty acids, which may be more predisposing to disease. Did the change in fatty acids play a role, rather than the change in amino?

Lastly, the type of carbohydrate was not controlled for. Fructose is significantly more lipogenic than glucose. Yet they were lumped together as “carbohydrate,” and were uncontrolled. So what macronutrient really caused the reduction in liver fat? These methodological issues detract from the author’s message, and this study must be considered preliminary.

Robert H. Lustig, MD, MSL, is in the division of pediatric endocrinology, UCSF Benioff Children’s Hospital, San Francisco; member, UCSF Institute for Health Policy Studies. Dr. Lustig declared no conflicts of interest.

Body

Human studies to assess the effects of isocaloric macronutrient substitution are fraught with difficulty. If one macronutrient is increased, what happens to the others? If you observe an effect, is it the phenomenon you were seeking due to the macronutrient you altered, or an epiphenomenon due to changes in the others?

Markova et al. attempted to study a 6-week “isocaloric” increase of animal vs. plant protein (from 17% to 30% of calories as protein). However, a decrease of percent fat from 41% to 30%, and a reduction in carbohydrate from 42% to 40% occurred commensurately. This brings up three concerns. First, despite the diet’s being “isocaloric,” weight and body mass index decreased by 2 kg and 0.8 kg/m2, respectively. Reductions in intrahepatic, visceral, and subcutaneous fat, and an increase in lean body mass were noted. So was the diet isocaloric? Protein reduces plasma ghrelin levels and is more satiating. Furthermore, metabolism of protein to ATP is inefficient compared to that of carbohydrate or fat. The authors say only that calories were “unrestricted.” These issues do not engender “isocaloric” confidence.

Dr. Robert H. Lustig
Dr. Robert H. Lustig
Second, animal protein (high branched-chain amino acid and methionine) consists of meat and dairy, but their fatty acid compositions are quite different. Dairy has odd-chain fatty acids, which are protective against type 2 diabetes, while meat has even-chain fatty acids, which may be more predisposing to disease. Did the change in fatty acids play a role, rather than the change in amino?

Lastly, the type of carbohydrate was not controlled for. Fructose is significantly more lipogenic than glucose. Yet they were lumped together as “carbohydrate,” and were uncontrolled. So what macronutrient really caused the reduction in liver fat? These methodological issues detract from the author’s message, and this study must be considered preliminary.

Robert H. Lustig, MD, MSL, is in the division of pediatric endocrinology, UCSF Benioff Children’s Hospital, San Francisco; member, UCSF Institute for Health Policy Studies. Dr. Lustig declared no conflicts of interest.

Title
Study’s methodology raises questions
Study’s methodology raises questions

 

Patients with type 2 diabetes should be put on diets rich in either animal or plant protein to reduce not only liver fat, but insulin resistance and hepatic necroinflammation markers as well, according to a study published in the February issue of Gastroenterology (doi: 10.1053/j.gastro.2016.10.007).

“High-protein diets have shown variable and sometimes even favorable effects on glucose metabolism and insulin sensitivity in people with type 2 diabetes and it is unclear which metabolic pathways are involved,” wrote the authors of the study, led by Mariya Markova, MD, of the German Institute of Human Nutrition Potsdam-Rehbrücke in Nuthetal, Germany.

SOURCE: American Gastroenterological Association

Obesity and insulin resistance have long been linked to liver fat, with excessive amounts of the latter causing nonalcoholic fatty liver disease (NAFLD), with a significant risk of nonalcoholic steatohepatitis (NASH) developing as well. Compounding this issue, at least in the United States, are widespread dietary and nutritional habits that promote consumption of animal protein, carbohydrates, and saturated fats. This “hypercaloric Western style diet,” as the authors call it, exacerbates the accumulation of fat deposits in the liver and complicates the health of patients across the country, regardless of weight.

“Remarkably, diets restricted in methionine were shown to prevent the development of insulin resistance and of the metabolic syndrome in animal models [so] the type of protein may elicit different metabolic responses depending on the amino acid composition,” Dr. Markova and her coinvestigators noted. “It is therefore hypothesized that high-plant-protein diets exert favorable effects on hepatic fat content and metabolic responses as compared to high intake of animal protein rich in BCAA [branched-chain amino acids] and methionine,” both of which can be found in suitably low levels via plant protein.

Dr. Markova and her team devised a prospective, randomized, open-label clinical trial involving 44 patients with type 2 diabetes and NAFLD, all of whom were recruited at the department of clinical nutrition of the German Institute of Human Nutrition Potsdam-Rehbrücke between June 2013 and March 2015. Subjects were randomized into one of two cohorts, each of which were assigned a diet rich in either animal protein (AP) or plant protein (PP) for a period of 6 weeks. Median body mass index in the AP cohort was 31.0 ± 0.8, and was 29.4 ± 1.0 in the PP cohort.

The AP cohort diet consisted mainly of meat and dairy products, while legumes constituted the bulk of the PP cohort diet. Both diets were isocaloric and had the same macronutrient makeup: 30% protein, 40% carbohydrates, and 30% fat. Seven subjects dropped out prior to completion of the study; of the 37 that remained all the way through – 19 in the AP cohort, 18 in the PP cohort – the age range was 49-78 years. Subjects maintained the same physical exercise regimens throughout the study that they had beforehand, and were asked not to alter them. Hemoglobin A1c levels ranged from 5.8% to 8.8% at baseline, and evaluations were carried out at fasting levels for each subject.

Patients in both cohorts saw significant decreases in intrahepatic fat content by the end of the trial period. Those in the AP cohort saw decreases of 48.0% (P = .0002), while those in the PP cohort saw a decrease of 35.7% (P = .001). Perhaps most importantly, the reductions in both cohorts were not correlated to body weight. In addition, levels of fibroblast growth factor 21 (FGF21), which has been shown to be a predictive marker of NAFLD, decreased by nearly 50% for both AP and PP cohorts (P less than .0002 for both).

“Despite the elevated intake and postprandial uptake of methionine and BCAA in the AP group, there was no indication of negative effects of these components,” the authors stated in the study. “The origin of protein – animal or plant – did not play a major role. Both high-protein diets unexpectedly induced strong reductions of FGF21, which was associated with metabolic improvements and the decrease of IHL.”

Despite these findings, however, the 6-week time span used here is not sufficient to determine just how viable this diet may be in the long term, according to the authors. Further studies will be needed, and will need to take place over longer periods of time, to “show the durability of the responses and eventual adverse effects of the diets.” Furthermore, different age groups must be examined to find out if the benefits observed by Dr. Markova and her coinvestigators were somehow related to the age of these subjects.

The study was funded by grants from German Federal Ministry of Food and Agriculture and German Center for Diabetes Research. Dr. Markova and her coauthors did not report any financial disclosures.

 

 

 

Patients with type 2 diabetes should be put on diets rich in either animal or plant protein to reduce not only liver fat, but insulin resistance and hepatic necroinflammation markers as well, according to a study published in the February issue of Gastroenterology (doi: 10.1053/j.gastro.2016.10.007).

“High-protein diets have shown variable and sometimes even favorable effects on glucose metabolism and insulin sensitivity in people with type 2 diabetes and it is unclear which metabolic pathways are involved,” wrote the authors of the study, led by Mariya Markova, MD, of the German Institute of Human Nutrition Potsdam-Rehbrücke in Nuthetal, Germany.

SOURCE: American Gastroenterological Association

Obesity and insulin resistance have long been linked to liver fat, with excessive amounts of the latter causing nonalcoholic fatty liver disease (NAFLD), with a significant risk of nonalcoholic steatohepatitis (NASH) developing as well. Compounding this issue, at least in the United States, are widespread dietary and nutritional habits that promote consumption of animal protein, carbohydrates, and saturated fats. This “hypercaloric Western style diet,” as the authors call it, exacerbates the accumulation of fat deposits in the liver and complicates the health of patients across the country, regardless of weight.

“Remarkably, diets restricted in methionine were shown to prevent the development of insulin resistance and of the metabolic syndrome in animal models [so] the type of protein may elicit different metabolic responses depending on the amino acid composition,” Dr. Markova and her coinvestigators noted. “It is therefore hypothesized that high-plant-protein diets exert favorable effects on hepatic fat content and metabolic responses as compared to high intake of animal protein rich in BCAA [branched-chain amino acids] and methionine,” both of which can be found in suitably low levels via plant protein.

Dr. Markova and her team devised a prospective, randomized, open-label clinical trial involving 44 patients with type 2 diabetes and NAFLD, all of whom were recruited at the department of clinical nutrition of the German Institute of Human Nutrition Potsdam-Rehbrücke between June 2013 and March 2015. Subjects were randomized into one of two cohorts, each of which were assigned a diet rich in either animal protein (AP) or plant protein (PP) for a period of 6 weeks. Median body mass index in the AP cohort was 31.0 ± 0.8, and was 29.4 ± 1.0 in the PP cohort.

The AP cohort diet consisted mainly of meat and dairy products, while legumes constituted the bulk of the PP cohort diet. Both diets were isocaloric and had the same macronutrient makeup: 30% protein, 40% carbohydrates, and 30% fat. Seven subjects dropped out before completing the study; among the 37 who finished (19 in the AP cohort, 18 in the PP cohort), ages ranged from 49 to 78 years. Subjects were asked to maintain, without alteration, the physical exercise regimens they had followed before the study. Hemoglobin A1c levels ranged from 5.8% to 8.8% at baseline, and all evaluations were carried out with subjects fasting.

Patients in both cohorts saw significant decreases in intrahepatic fat content by the end of the trial period: 48.0% in the AP cohort (P = .0002) and 35.7% in the PP cohort (P = .001). Perhaps most importantly, the reductions in both cohorts were not correlated with body weight. In addition, levels of fibroblast growth factor 21 (FGF21), which has been shown to be a predictive marker of NAFLD, decreased by nearly 50% in both the AP and PP cohorts (P less than .0002 for both).

“Despite the elevated intake and postprandial uptake of methionine and BCAA in the AP group, there was no indication of negative effects of these components,” the authors stated in the study. “The origin of protein – animal or plant – did not play a major role. Both high-protein diets unexpectedly induced strong reductions of FGF21, which was associated with metabolic improvements and the decrease of IHL.”

These findings notwithstanding, the trial’s 6-week span is not sufficient to determine how viable such diets may be in the long term, according to the authors. Further studies conducted over longer periods will be needed to “show the durability of the responses and eventual adverse effects of the diets.” Different age groups must also be examined to determine whether the benefits observed by Dr. Markova and her coinvestigators were related to the age of these subjects.

The study was funded by grants from the German Federal Ministry of Food and Agriculture and the German Center for Diabetes Research. Dr. Markova and her coauthors did not report any financial disclosures.

FROM GASTROENTEROLOGY

Vitals

 

Key clinical point: Protein-rich diets can significantly reduce liver fat and markers of insulin resistance and hepatic necroinflammation in individuals with type 2 diabetes.

Major finding: Animal- and plant-protein diets reduced liver fat in type 2 diabetes patients by 48.0% and 35.7%, respectively, over the course of 6 weeks (P = .0002 and P = .001).

Data source: Prospective, randomized, open-label trial of 44 patients with type 2 diabetes and NAFLD (37 completers), conducted from June 2013 to March 2015.

Disclosures: The German Federal Ministry of Food and Agriculture and the German Center for Diabetes Research supported the study. The authors did not report any financial disclosures.

Severe postoperative pain following thoracotomy predicts persistent pain months later

Article Type
Changed
Wed, 01/02/2019 - 09:45

 

Patients who suffer from severe pain in the days immediately following an open thoracotomy are significantly more likely to still be experiencing pain from the procedure 6 months later, according to a study published in the Journal of Clinical Anesthesia.

“A recognized cause of persistent postsurgical pain is poorly controlled immediate postoperative pain,” wrote the authors, led by Gopinath Niraj, MD, of the University Hospitals of Leicester (England) NHS Trust. “Open thoracotomy can induce significant pain during the immediate postoperative period. Patients undergoing thoracotomy also have one of the greatest incidences of chronic postoperative pain and disability among all the surgical procedures.”

Dr. Niraj and his coinvestigators conducted an audit of 504 patients who underwent open thoracotomy at a single center between May 2010 and April 2012. The audit used a 15-item questionnaire comprising yes/no questions on the presence and location of postoperative pain and numerical ratings of pain severity; scores of 7 or higher on a 10-point scale indicated “severe pain,” according to the investigators (J Clin Anesth. 2017;36:174-7). Subjects were evaluated at 72 hours and at 6 months after the operation.

Of the 504 patients, 364 survived, of whom 306 received questionnaires. Of those 306, 133 (43%) reported at least five episodes of severe pain within 72 hours of the operation. Within this group, 109 (82%) reported some degree of persistent pain 6 months later; chronic post-thoracotomy pain was severe in 10% of those subjects, moderate in 24%, and mild in 48%.

A total of 289 of the 306 subjects (95%) received an epidural analgesic in the 72 hours after thoracotomy. In terms of satisfaction with pain management, patients were overall positive; 36.3% rated it “excellent,” 43.8% called it “good,” while only 15.8% said it was “fair” and 3.8% said it was “poor.”

“Our audit has some limitations,” the authors noted. “The retrospective project relied on patient self-report and recall.”

Dr. Niraj and his coauthors did not report any financial conflicts. No funding sources for this study were disclosed.


FROM THE JOURNAL OF CLINICAL ANESTHESIA

Vitals

 

Key clinical point: Patients who experience severe pain within 72 hours of undergoing thoracotomy are more likely to still be in pain 6 months after the procedure.

Major finding: 133 of 306 patients (43%) reported severe pain within 72 hours of thoracotomy; of these, 109 (82%) still had pain 6 months later.

Data source: Retrospective, single-center study of 504 thoracotomy patients between May 2010 and April 2012.

Disclosures: The authors reported no financial conflicts; no funding source was disclosed.

MRI useful to distinguish between PACNS and neurosarcoidosis

Article Type
Changed
Mon, 01/07/2019 - 12:48

 

MRI can help to differentiate between neurosarcoidosis and primary angiitis of the central nervous system, according to a single-center study comparing the two conditions.

Patients with neurosarcoidosis were significantly more likely to display spinal cord, basal meningeal, and cranial nerve involvements than were patients with primary angiitis of the central nervous system (PACNS), making MRI an efficient tool in distinguishing between the two.

“This will help us to make a differential diagnosis more accurately,” Didem Saygin, MD, of the Cleveland Clinic, explained at the annual meeting of the American College of Rheumatology. “Because we can differentiate these two conditions just with MRI, this means we can better approach the patients and give the appropriate treatments early.”

Dr. Saygin and her coinvestigators at the Cleveland Clinic recruited 34 patients with PACNS and 42 patients with neurosarcoidosis, all of whom had brain and/or spinal cord MRIs performed close to the time of presentation. The average age was 45.6 years in the PACNS group and 44.1 years in the neurosarcoidosis group. The MRIs were reviewed in blinded fashion by two neuroradiologists, who examined and recorded data on the pachymeninges, leptomeninges, basal meninges, cranial nerves, cerebral gray and white matter, and spinal cord. The presence, sites, patterns, localization, and laterality of involvement were noted, as well as any mass effect, parenchymal hemorrhage, and ventriculomegaly.

Unilateral cranial nerve involvement appeared on MRI in 71% of neurosarcoidosis patients, compared with 3.0% for PACNS. Neurosarcoidosis patients also had consistently higher rates of involvement of the spinal cord (cervical, 62.5% vs. 0%; thoracic, 45.8% vs. 0%) as well as basal meninges (basal cistern, 21.6% vs. 0%; brain stem, 27.0% vs. 3.0%).

However, there was no significant difference between PACNS and neurosarcoidosis patients in pachymeningeal and leptomeningeal involvement, pituitary/sella turcica involvement, or mass effect, the last of which was seen in 20% of PACNS patients and 14% of those with neurosarcoidosis.

The relatively small sample size of 76 patients is a limitation of the study, Dr. Saygin said.

No funding source was disclosed for this study. Dr. Saygin did not report any relevant financial disclosures.


AT THE ACR ANNUAL MEETING

Vitals

 

Key clinical point: Consider MRI to distinguish between neurosarcoidosis and primary angiitis of the central nervous system (PACNS).

Major finding: Unilateral cranial nerve involvement appeared on MRI in 71% of neurosarcoidosis patients, compared with 3.0% for PACNS.

Data source: Single-center study of 34 PACNS patients and 42 neurosarcoidosis patients.

Disclosures: No relevant financial disclosures were reported.

Sofosbuvir, daclatasvir combo best treatment for HCV cryoglobulinemia vasculitis

Article Type
Changed
Sat, 12/08/2018 - 03:10

 

A combined regimen of sofosbuvir and daclatasvir is the best option for treating cryoglobulinemia vasculitis in patients with hepatitis C virus infection, according to the findings of a new study presented at the annual meeting of the American College of Rheumatology.

“The HCV cryoglobulinemia vasculitis is a very important vasculitis because it represents 5% of chronically infected HCV patients in the world,” explained David Saadoun, MD, of Sorbonne Universities, Paris. “It’s sometimes a life-threatening vasculitis because patients may develop inflammation [so] there’s a need for very active and well-tolerated treatment.”

Dr. Saadoun and his coinvestigators recruited 35 HCV patients experiencing cryoglobulinemia vasculitis. The median age for the entire cohort was 57 years; 45% of subjects were female. Twenty-one patients had HCV genotype 1, two patients had genotype 2, seven had genotype 3, three had genotype 4, and two had genotype 5. All individuals were placed on a regimen of sofosbuvir (400 mg) and daclatasvir (60 mg), administered daily for 12-24 weeks.

The primary endpoint – complete response to treatment at the end of the regimen – was achieved in 91% of subjects by the end of 24 weeks. Furthermore, 50% of patients experienced complete immunological response, defined as the complete clearance of cryoglobulin, within 24 weeks. At 12 weeks, average cryoglobulin levels had decreased from 0.36 ± 0.12 to 0.10 ± 0.08 g/L (P = .019), while average aminotransferase levels decreased from 57.6 ± 7.1 to 20.4 ± 2.0 IU/mL (P less than .01).

But perhaps most significant, according to Dr. Saadoun, is that fewer than 5% of subjects required additional treatment with immunosuppressants, such as steroids or rituximab. Average HCV viral loads dropped from 5.6 to 1.18 log10 IU/mL at week 4 (P less than .01), with similarly sustained results through week 12, indicating good virological responses. No serious adverse events were reported by any subjects throughout the trial period.

“The limitation is that there are quite a few patients, because it is only 35 patients this time, [and] that it’s a prospective, open-label study with no comparators,” Dr. Saadoun explained, adding that, in terms of further research, “[any] new study would focus on the way to avoid rituximab and steroid use in these patients, and to also have more patients treated with this regimen.”

No funding source was disclosed for this study. Dr. Saadoun did not report any relevant financial disclosures.


AT THE ACR ANNUAL MEETING

Vitals

 

Key clinical point: For patients with hepatitis C virus–induced cryoglobulinemia vasculitis, a combined regimen of sofosbuvir and daclatasvir provides the quickest, most effective, and safest route for treatment.

Major finding: Of 35 patients, 32 (91%) achieved complete clinical response within 24 weeks; fewer than 5% required additional immunosuppressants, and none experienced serious adverse events.

Data source: A prospective, open-label study of 35 patients with cryoglobulinemia vasculitis brought on by HCV infection.

Disclosures: No funding source was disclosed. Dr. Saadoun did not report any relevant financial disclosures.