Quality of Life and Population Health in Behavioral Health Care: A Retrospective, Cross-Sectional Study


From Milwaukee County Behavioral Health Services, Milwaukee, WI.

Abstract

Objectives: The goal of this study was to determine whether a single-item quality of life (QOL) measure could serve as a useful population health–level metric within the Quadruple Aim framework in a publicly funded behavioral health system.

Design: This was a retrospective, cross-sectional study that examined the correlation between the single-item QOL measure and several other key measures of the social determinants of health and a composite measure of acute service utilization for all patients receiving mental health and substance use services in a community behavioral health system.

Methods: Data were collected for 4488 patients who had at least 1 assessment between October 1, 2020, and September 30, 2021. Data on social determinants of health were obtained through patient self-report; acute service use data were obtained from electronic health records.

Results: Statistical analyses revealed results in the expected direction for all relationships tested. Patients with higher QOL were more likely to report “Good” or better self-rated physical health, be employed, have a private residence, and report recent positive social interactions, and were less likely to have received acute services in the previous 90 days.

Conclusion: A single-item QOL measure shows promise as a general, minimally burdensome whole-system metric that can function as a target for population health management efforts in a large behavioral health system. Future research should explore whether this QOL measure is sensitive to change over time and examine its temporal relationship with other key outcome metrics.

Keywords: Quadruple Aim, single-item measures, social determinants of health, acute service utilization metrics.

 

 

The Triple Aim for health care—improving the individual experience of care, increasing the health of populations, and reducing the costs of care—was first proposed in 2008.1 More recently, some have advocated for an expanded focus to include a fourth aim: the quality of staff work life.2 Since this seminal paper was published, many health care systems have endeavored to adopt and implement the Quadruple Aim3,4; however, the concepts representing each of the aims are not universally defined,3 nor are the measures needed to populate the Quadruple Aim always available within the health system in question.5

Although several assessment models and frameworks that provide guidance to stakeholders have been developed,6,7 it is ultimately up to organizations themselves to determine which measures they should deploy to best represent the different quadrants of the Quadruple Aim.6 Evidence suggests, however, that quality measurement, and the administrative time required to conduct it, can be both financially and emotionally burdensome to providers and health systems.8-10 Thus, it is incumbent on organizations to select a set of measures that are not only meaningful but as parsimonious as possible.6,11,12

Quality of life (QOL) is a potential candidate to assess the aim of population health. Brief health-related QOL questions have long been used in epidemiological surveys, such as the Behavioral Risk Factor Surveillance System survey.13 Such questions are also a key component of community health frameworks, such as the County Health Rankings developed by the University of Wisconsin Population Health Institute.14 Furthermore, Humana recently announced that increasing the number of physical and mental health “Healthy Days” (which are among the Centers for Disease Control and Prevention’s Health-Related Quality of Life questions15) among members enrolled in its insurance plans would become a major goal for the organization.16,17 Many of these measures, while brief, focus on QOL as a function of health, often as a self-rated construct (from “Poor” to “Excellent”) or in the form of days of poor physical or mental health in the past 30 days,15 rather than evaluating QOL itself; however, several authors have pointed out that health status and QOL are related but distinct concepts.18,19

Brief single-item assessments focused specifically on QOL have been developed and implemented within nonclinical20 and clinical populations, including individuals with cancer,21 adults with disabilities,22 individuals with cystic fibrosis,23 and children with epilepsy.24 Despite the long history of QOL assessment in behavioral health treatment,25 single-item measures have not been widely implemented in this population.

Milwaukee County Behavioral Health Services (BHS), a publicly funded, county-based behavioral health care system in Milwaukee, Wisconsin, provides inpatient and ambulatory treatment, psychiatric emergency care, withdrawal management, care management, crisis services, and other support services to individuals in Milwaukee County. In 2018, the community services arm of BHS began implementing a single QOL question from the World Health Organization’s WHOQOL-BREF26: On a 5-point rating scale of “Very Poor” to “Very Good,” “How would you rate your overall quality of life right now?” Previous research by Atroszko and colleagues,20 which used a similar approach with the same item from the WHOQOL-BREF, reported correlations in the expected direction between the single-item QOL measure and perceived stress, depression, anxiety, loneliness, and daily hours of sleep. That study’s sample, however, comprised opportunistically recruited college students, not a clinical population. Further, the researchers did not examine the relationship of QOL with acute service utilization or with other measures of the social determinants of health, such as housing, employment, or social connectedness.

The following study was designed to extend these results by focusing on a clinical population—individuals with mental health or substance use issues—being served in a large, publicly funded behavioral health system in Milwaukee, Wisconsin. The objective of this study was to determine whether a single-item QOL measure could be used as a brief, parsimonious measure of overall population health by examining its relationship with other key outcome measures for patients receiving services from BHS. This study was reviewed and approved by BHS’s Institutional Review Board.

 

 

Methods

All patients engaged in nonacute community services are offered a standardized assessment that includes, among other measures, items related to QOL, housing status, employment status, self-rated physical health, and social connectedness. This assessment is administered at intake, discharge, and every 6 months while patients are enrolled in services. Patients who received at least 1 assessment between October 1, 2020, and September 30, 2021, were included in the analyses. Patients receiving crisis, inpatient, or withdrawal management services alone (ie, did not receive any other community-based services) were not offered the standard assessment and thus were not included in the analyses. If patients had more than 1 assessment during this time period, QOL data from the last assessment were used. Data on housing (private residence status, defined as adults living alone or with others without supervision in a house or apartment), employment status, self-rated physical health, and social connectedness (measured by asking people whether they have had positive interactions with family or friends in the past 30 days) were extracted from the same timepoint as well.
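To make the selection rule concrete, the sketch below shows one way the cohort described above could be assembled: keep assessments completed during the study window and, for patients with more than 1 assessment, retain only the most recent one. This is an illustrative sketch only; the file and column names (patient_id, assessment_date, and so on) are hypothetical and do not reflect BHS’s actual data model.

```python
import pandas as pd

# Hypothetical assessment extract; file and column names are illustrative only.
assessments = pd.read_csv("assessments.csv", parse_dates=["assessment_date"])

# Keep assessments completed during the study window.
in_window = assessments[
    (assessments["assessment_date"] >= "2020-10-01")
    & (assessments["assessment_date"] <= "2021-09-30")
]

# If a patient had more than 1 assessment in the window,
# retain only the most recent one.
latest = (
    in_window.sort_values("assessment_date")
             .drop_duplicates(subset="patient_id", keep="last")
)
```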

Also included in the analyses were rates of acute service utilization, in which any patient with at least 1 visit to BHS’s psychiatric emergency department, withdrawal management facility, or psychiatric inpatient facility in the 90 days prior to the date of the assessment received a code of “Yes,” and any patient who did not receive any of these services received a code of “No.” Chi-square analyses were conducted to determine the relationship between QOL rankings (“Very Poor,” “Poor,” “Neither Good nor Poor,” “Good,” and “Very Good”) and housing, employment, self-rated physical health, social connectedness, and 90-day acute service use. All acute service utilization data were obtained from BHS’s electronic health records system. All data used in the study were stored on a secure, password-protected server. All analyses were conducted with SPSS software (SPSS 28; IBM).
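The analyses themselves were run in SPSS; purely as an illustration of the same logic in code, a rough Python sketch (continuing from the hypothetical `latest` assessment table above) of the 90-day acute-use flag and a chi-square test of independence against the 5 QOL levels might look like the following. The scipy-based test and all column names are assumptions for illustration, not the study’s actual implementation.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical extract of acute-service visits (psychiatric emergency department,
# withdrawal management, or psychiatric inpatient), one row per visit.
acute = pd.read_csv("acute_visits.csv", parse_dates=["visit_date"])

# Join visits to each patient's index assessment and flag any visit that
# falls in the 90 days before the assessment date.
joined = latest.merge(acute, on="patient_id", how="left")
visit_in_window = (
    (joined["visit_date"] < joined["assessment_date"])
    & (joined["visit_date"] >= joined["assessment_date"] - pd.Timedelta(days=90))
)
acute_90d = (
    visit_in_window.groupby(joined["patient_id"])
    .any()
    .map({True: "Yes", False: "No"})
)
latest["acute_90d"] = latest["patient_id"].map(acute_90d)

# Chi-square test of independence: 5 QOL levels x 90-day acute use (Yes/No).
table = pd.crosstab(latest["qol"], latest["acute_90d"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")
```

The same crosstab-plus-chi-square pattern would apply to each of the other dichotomized outcomes (housing, employment, self-rated physical health, and social connectedness).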

Results

Data were available for 4488 patients who received an assessment between October 1, 2020, and September 30, 2021 (total numbers per item vary because some items had missing data; see supplementary eTables 1-3 for sample size per item). Demographics of the patient sample are listed in Table 1; the demographics of the patients who were missing data for specific outcomes are presented in eTables 1-3.

Demographics: Those With Complete vs Missing Housing Data

Demographics: Those With Complete vs Missing Employment Data

Demographics: Those With Complete vs Missing Self-Rated Physical Health Data

Demographics of Patient Sample

Statistical analyses revealed results in the expected direction for all relationships tested (Table 2). As patients’ self-reported QOL improved, so did the rate of self-reported “Good” or better physical health, which was 576% higher among individuals who reported “Very Good” QOL than among those who reported “Very Poor” QOL. Similarly, when compared with individuals with “Very Poor” QOL, individuals who reported “Very Good” QOL were 21.91% more likely to report having a private residence, 126.7% more likely to report being employed, and 29.17% more likely to report having had positive social interactions with family and friends in the past 30 days. There was an inverse relationship between QOL and the likelihood that a patient had received at least 1 admission for an acute service in the previous 90 days: patients who reported “Very Good” QOL were 86.34% less likely to have had an admission than patients with “Very Poor” QOL (2.8% vs 20.5%, respectively). The relationships among the criterion variables used in this study are presented in Table 3.
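For readers checking the arithmetic, the relative differences quoted above compare rates between the extreme QOL groups. Using the acute-use figures reported in the text (2.8% for “Very Good” QOL vs 20.5% for “Very Poor” QOL), a minimal worked example:

```python
# Relative reduction in 90-day acute service use, "Very Good" vs "Very Poor" QOL,
# using the rates reported above.
very_good = 2.8   # % with at least 1 acute admission among "Very Good" QOL
very_poor = 20.5  # % with at least 1 acute admission among "Very Poor" QOL

relative_reduction = (very_poor - very_good) / very_poor * 100
print(f"{relative_reduction:.2f}% less likely")  # 86.34% less likely
```

The “more likely” figures (eg, 126.7% for employment) are the analogous relative increases computed from each pair of group rates.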

Relationship Between Quality of Life Scores and Key Outcomes

 

 

Discussion

The results of this preliminary analysis suggest that self-rated QOL is related to key health, social determinants of health, and acute service utilization metrics. These data are important for several reasons. First, because QOL is diagnostically agnostic, it is a cross-cutting measure that can be used with clinically diverse populations receiving an array of different services. Second, at 1 item, the QOL measure is extremely brief and therefore minimally onerous to implement for both patients and administratively overburdened providers. Third, its correlation with other key metrics suggests that it can function as a broad population health measure for health care organizations, because individuals with higher QOL will also likely have better outcomes in other key areas. This suggests that it has the potential to broadly represent the overall status of a population of patients, thus functioning as a type of “whole system” measure, which the Institute for Healthcare Improvement describes as “a small set of measures that reflect a health system’s overall performance on core dimensions of quality guided by the Triple Aim.”7 These whole system measures can help focus an organization’s strategic initiatives and efforts on the issues that matter most to the patients and community it serves.

Relationships Among Key Outcomes

The relationship of QOL to acute service utilization deserves special mention. As an administrative measure, utilization is not susceptible to the same response bias as the other self-reported variables. Furthermore, acute services are costly to health systems, and hospital readmissions are associated with payment reductions in the Centers for Medicare and Medicaid Services (CMS) Hospital Readmissions Reduction Program for hospitals that fail to meet certain performance targets.27 Thus, because of its alignment with federal mandates, improved QOL (and potentially concomitant decreases in acute service use) may have significant financial implications for health systems as well.

This study was limited by several factors. First, it was focused on a population receiving publicly funded behavioral health services with strict eligibility requirements, one of which stipulated that individuals must be at 200% or less of the Federal Poverty Level; therefore, the results might not be applicable to health systems with a more clinically or socioeconomically diverse patient population. Second, because these data are cross-sectional, it was not possible to determine whether QOL improved over time or whether changes in QOL covaried longitudinally with the other metrics under observation. For example, if patients’ QOL improved from the first to last assessment, did their employment or residential status improve as well, or were these patients more likely to be employed at their first assessment? Furthermore, if there was covariance, did changes in employment, housing status, and so on precede changes in QOL or vice versa? Multiple longitudinal observations would help to address these questions and will be the focus of future analyses.

Conclusion

This preliminary study suggests that a single-item QOL measure may be a valuable population health–level metric for health systems. It requires little administrative effort on the part of either the clinician or the patient. It is also agnostic with regard to clinical issue or treatment approach and can therefore accommodate a range of diagnoses and patient-specific, idiosyncratic recovery goals. It is correlated with other key health, social determinants of health, and acute service utilization indicators and can therefore serve as a “whole system” measure because of its ability to broadly represent improvements in an entire population. Furthermore, QOL is patient-centered in that data are obtained through patient self-report, which is a high priority for CMS and other health care organizations.28 In summary, a single-item QOL measure holds promise for health care organizations looking to implement the Quadruple Aim and assess the health of the populations they serve in a manner that is simple, efficient, and patient-centered.

Acknowledgments: The author thanks Jennifer Wittwer for her thoughtful comments on the initial draft of this manuscript and Gary Kraft for his help extracting the data used in the analyses.

Corresponding author: Walter Matthew Drymalski, PhD; walter.drymalski@milwaukeecountywi.gov

Disclosures: None reported.

References

1. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759-769. doi:10.1377/hlthaff.27.3.759

2. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573-576. doi:10.1370/afm.1713

3. Hendrikx RJP, Drewes HW, Spreeuwenberg M, et al. Which triple aim related measures are being used to evaluate population management initiatives? An international comparative analysis. Health Policy. 2016;120(5):471-485. doi:10.1016/j.healthpol.2016.03.008

4. Whittington JW, Nolan K, Lewis N, Torres T. Pursuing the triple aim: the first 7 years. Milbank Q. 2015;93(2):263-300. doi:10.1111/1468-0009.12122

5. Ryan BL, Brown JB, Glazier RH, Hutchison B. Examining primary healthcare performance through a triple aim lens. Healthc Policy. 2016;11(3):19-31.

6. Stiefel M, Nolan K. A guide to measuring the Triple Aim: population health, experience of care, and per capita cost. Institute for Healthcare Improvement; 2012. Accessed November 1, 2022. https://nhchc.org/wp-content/uploads/2019/08/ihiguidetomeasuringtripleaimwhitepaper2012.pdf

7. Martin L, Nelson E, Rakover J, Chase A. Whole system measures 2.0: a compass for health system leaders. Institute for Healthcare Improvement; 2016. Accessed November 1, 2022. http://www.ihi.org:80/resources/Pages/IHIWhitePapers/Whole-System-Measures-Compass-for-Health-System-Leaders.aspx

8. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood). 2016;35(3):401-406. doi:10.1377/hlthaff.2015.1258

9. Rao SK, Kimball AB, Lehrhoff SR, et al. The impact of administrative burden on academic physicians: results of a hospital-wide physician survey. Acad Med. 2017;92(2):237-243. doi:10.1097/ACM.0000000000001461

10. Woolhandler S, Himmelstein DU. Administrative work consumes one-sixth of U.S. physicians’ working hours and lowers their career satisfaction. Int J Health Serv. 2014;44(4):635-642. doi:10.2190/HS.44.4.a

11. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964-968. doi:10.1136/bmjqs-2012-001081

12. Vital Signs: Core Metrics for Health and Health Care Progress. National Academies Press; 2015. doi:10.17226/19402

13. Centers for Disease Control and Prevention. BRFSS questionnaires. Accessed November 1, 2022. https://www.cdc.gov/brfss/questionnaires/index.htm

14. County Health Rankings and Roadmaps. Measures & data sources. University of Wisconsin Population Health Institute. Accessed November 1, 2022. https://www.countyhealthrankings.org/explore-health-rankings/measures-data-sources

15. Centers for Disease Control and Prevention. Healthy days core module (CDC HRQOL-4). Accessed November 1, 2022. https://www.cdc.gov/hrqol/hrqol14_measure.htm

16. Cordier T, Song Y, Cambon J, et al. A bold goal: more healthy days through improved community health. Popul Health Manag. 2018;21(3):202-208. doi:10.1089/pop.2017.0142

17. Slabaugh SL, Shah M, Zack M, et al. Leveraging health-related quality of life in population health management: the case for healthy days. Popul Health Manag. 2017;20(1):13-22. doi:10.1089/pop.2015.0162

18. Karimi M, Brazier J. Health, health-related quality of life, and quality of life: what is the difference? Pharmacoeconomics. 2016;34(7):645-649. doi:10.1007/s40273-016-0389-9

19. Smith KW, Avis NE, Assmann SF. Distinguishing between quality of life and health status in quality of life research: a meta-analysis. Qual Life Res. 1999;8(5):447-459. doi:10.1023/a:1008928518577

20. Atroszko PA, Baginska P, Mokosinska M, et al. Validity and reliability of single-item self-report measures of general quality of life, general health and sleep quality. In: CER Comparative European Research 2015. Sciemcee Publishing; 2015:207-211.

21. Singh JA, Satele D, Pattabasavaiah S, et al. Normative data and clinically significant effect sizes for single-item numerical linear analogue self-assessment (LASA) scales. Health Qual Life Outcomes. 2014;12:187. doi:10.1186/s12955-014-0187-z

22. Siebens HC, Tsukerman D, Adkins RH, et al. Correlates of a single-item quality-of-life measure in people aging with disabilities. Am J Phys Med Rehabil. 2015;94(12):1065-1074. doi:10.1097/PHM.0000000000000298

23. Yohannes AM, Dodd M, Morris J, Webb K. Reliability and validity of a single item measure of quality of life scale for adult patients with cystic fibrosis. Health Qual Life Outcomes. 2011;9:105. doi:10.1186/1477-7525-9-105

24. Conway L, Widjaja E, Smith ML. Single-item measure for assessing quality of life in children with drug-resistant epilepsy. Epilepsia Open. 2017;3(1):46-54. doi:10.1002/epi4.12088

25. Barry MM, Zissi A. Quality of life as an outcome measure in evaluating mental health services: a review of the empirical evidence. Soc Psychiatry Psychiatr Epidemiol. 1997;32(1):38-47. doi:10.1007/BF00800666

26. Skevington SM, Lotfy M, O’Connell KA. The World Health Organization’s WHOQOL-BREF quality of life assessment: psychometric properties and results of the international field trial. Qual Life Res. 2004;13(2):299-310. doi:10.1023/B:QURE.0000018486.91360.00

27. Centers for Medicare & Medicaid Services. Hospital readmissions reduction program (HRRP). Accessed November 1, 2022. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program

28. Centers for Medicare & Medicaid Services. Patient-reported outcome measures. CMS Measures Management System. Published May 2022. Accessed November 1, 2022. https://www.cms.gov/files/document/blueprint-patient-reported-outcome-measures.pdf

Journal of Clinical Outcomes Management. 29(6):214-219.


17. Slabaugh SL, Shah M, Zack M, et al. Leveraging health-related quality of life in population health management: the case for healthy days. Popul Health Manag. 2017;20(1):13-22. doi:10.1089/pop.2015.0162

18. Karimi M, Brazier J. Health, health-related quality of life, and quality of life: what is the difference? Pharmacoeconomics. 2016;34(7):645-649. doi:10.1007/s40273-016-0389-9

19. Smith KW, Avis NE, Assmann SF. Distinguishing between quality of life and health status in quality of life research: a meta-analysis. Qual Life Res. 1999;8(5):447-459. doi:10.1023/a:1008928518577

20. Atroszko PA, Baginska P, Mokosinska M, et al. Validity and reliability of single-item self-report measures of general quality of life, general health and sleep quality. In: CER Comparative European Research 2015. Sciemcee Publishing; 2015:207-211.

21. Singh JA, Satele D, Pattabasavaiah S, et al. Normative data and clinically significant effect sizes for single-item numerical linear analogue self-assessment (LASA) scales. Health Qual Life Outcomes. 2014;12:187. doi:10.1186/s12955-014-0187-z

22. Siebens HC, Tsukerman D, Adkins RH, et al. Correlates of a single-item quality-of-life measure in people aging with disabilities. Am J Phys Med Rehabil. 2015;94(12):1065-1074. doi:10.1097/PHM.0000000000000298

23. Yohannes AM, Dodd M, Morris J, Webb K. Reliability and validity of a single item measure of quality of life scale for adult patients with cystic fibrosis. Health Qual Life Outcomes. 2011;9:105. doi:10.1186/1477-7525-9-105

24. Conway L, Widjaja E, Smith ML. Single-item measure for assessing quality of life in children with drug-resistant epilepsy. Epilepsia Open. 2017;3(1):46-54. doi:10.1002/epi4.12088

25. Barry MM, Zissi A. Quality of life as an outcome measure in evaluating mental health services: a review of the empirical evidence. Soc Psychiatry Psychiatr Epidemiol. 1997;32(1):38-47. doi:10.1007/BF00800666

26. Skevington SM, Lotfy M, O’Connell KA. The World Health Organization’s WHOQOL-BREF quality of life assessment: psychometric properties and results of the international field trial. Qual Life Res. 2004;13(2):299-310. doi:10.1023/B:QURE.0000018486.91360.00

27. Centers for Medicare & Medicaid Services. Hospital readmissions reduction program (HRRP). Accessed November 1, 2022. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program

28. Centers for Medicare & Medicaid Services. Patient-reported outcome measures. CMS Measures Management System. Published May 2022. Accessed November 1, 2022. https://www.cms.gov/files/document/blueprint-patient-reported-outcome-measures.pdf


Neurosurgery Operating Room Efficiency During the COVID-19 Era


From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of the increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and the proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as the proportion starting more than 15 minutes early, was not affected. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care, both from a surgical perspective and with regard to the inpatient hospital course. Safety protocols have been implemented nationwide to protect both patients and providers. Some elements of surgical care have changed drastically, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and expanded sterilization measures. Furloughs, layoffs, and reassignments driven by the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency, and OR staff with COVID-19 exposures or infections caused last-minute staffing changes. All of these scenarios can create problems, whether through true understaffing or through staff being redeployed to highly specialized areas, such as neurosurgery, in which they have little experience. Further obstacles to OR efficiency included policy changes involving PPE utilization and sterilization measures, as well as supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given the operator’s proximity to the patient’s respiratory passages, the frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to deliver neurosurgical care. Previous studies have shown changes in the makeup of neurosurgical patients seeking care, as well as in the acuity of their neurological consults.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers, and providing safe and effective care efficiently is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses from reduced OR efficiency.

 

 

Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the date on which the state of Tennessee declared a state of emergency, was chosen as the onset of COVID-19 for analytic purposes. The 90-day period before this date defined the pre-COVID-19 period, and the 90-day period after it, corresponding to the initial surge of cases, defined the peak-COVID-19 period. For comparison purposes, the post-peak-restrictions period was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was estimated from a single-institution census of cases confirmed by polymerase chain reaction (PCR), from which the average number of COVID-19 cases during a given month was calculated; this figure is a scaled trend, and the true number of COVID-19 cases in our hospital was not reported.
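
To illustrate how cases can be partitioned into these study periods, the following R sketch assigns fabricated operation dates to the three windows around the March 12, 2020, state-of-emergency date; the data frame and column names (case_id, or_date) are hypothetical and do not reflect the actual institutional database.

```r
# A minimal sketch, assuming a hypothetical data frame of operations with
# case_id and or_date columns; the dates below are fabricated examples.
library(dplyr)

emergency_date <- as.Date("2020-03-12")  # Tennessee state-of-emergency declaration

cases <- tibble(
  case_id = 1:4,
  or_date = as.Date(c("2020-01-15", "2020-04-02", "2020-09-10", "2021-06-01"))
)

cases <- cases %>%
  mutate(period = case_when(
    or_date >= emergency_date - 90 & or_date < emergency_date     ~ "pre-COVID-19",
    or_date >= emergency_date & or_date < emergency_date + 90     ~ "peak COVID-19",
    or_date >= emergency_date + 90                                ~ "post-peak restrictions",
    TRUE                                                          ~ NA_character_
  ))

print(cases)
```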

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes were first-start delay and OR turnover time between neurosurgical cases, with turnover defined as the time from one patient leaving the room until the next patient entered it. Preset threshold times were used in the analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, the standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.
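
As a minimal sketch of these outcome definitions, the following R code computes first-start delay and turnover time for a single fabricated OR day and flags the 15-minute and 90-minute thresholds; the column names (scheduled_start, in_room, out_of_room) and times are assumptions for illustration only, not fields from the study database.

```r
# A minimal sketch of the two efficiency outcomes, using fabricated times and
# assumed column names (scheduled_start, in_room, out_of_room).
library(dplyr)

cases <- tibble(
  or_room         = "OR-1",
  case            = 1:2,
  scheduled_start = as.POSIXct(c("2020-04-02 07:30", "2020-04-02 12:00"), tz = "UTC"),
  in_room         = as.POSIXct(c("2020-04-02 07:41", "2020-04-02 13:05"), tz = "UTC"),
  out_of_room     = as.POSIXct(c("2020-04-02 11:20", "2020-04-02 16:10"), tz = "UTC")
)

cases <- cases %>%
  group_by(or_room) %>%
  arrange(in_room, .by_group = TRUE) %>%
  mutate(
    # First-start delay applies only to the first case of the day in each room
    first_start_delay_min = if_else(row_number() == 1,
                                    as.numeric(difftime(in_room, scheduled_start, units = "mins")),
                                    NA_real_),
    delayed_over_15       = first_start_delay_min > 15,
    # Turnover: previous patient leaving the room until the next patient enters
    turnover_min          = as.numeric(difftime(in_room, lag(out_of_room), units = "mins")),
    turnover_over_90      = turnover_min > 90
  ) %>%
  ungroup()

print(cases)
```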

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004) (Table 1).

Table 1. First-Start Time Analysis
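
The following R sketch illustrates the kind of univariate comparison reported above, using simulated delays with the stated group sizes; the simulated values are arbitrary and do not reproduce Table 1, and pairing an overall Kruskal-Wallis test with a pairwise Wilcoxon rank-sum test is our assumption about how the three periods could be compared.

```r
# A minimal sketch on simulated per-case first-start delays (minutes);
# the rnorm() draws are arbitrary and do not reproduce the study data.
set.seed(1)
delay  <- c(rnorm(426,  mean = 6,  sd = 18),
            rnorm(357,  mean = 10, sd = 21),
            rnorm(2304, mean = 8,  sd = 20))
period <- factor(rep(c("pre", "peak", "post"), times = c(426, 357, 2304)),
                 levels = c("pre", "peak", "post"))

# Overall nonparametric comparison across the three periods (assumption),
# plus one pairwise Wilcoxon rank-sum test as described in the Methods
kruskal.test(delay ~ period)
wilcox.test(delay[period == "pre"], delay[period == "peak"])

# Proportion of cases delayed past the 15-minute threshold, compared by chi-square
delayed_15 <- delay > 15
chisq.test(table(period, delayed_15))
```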

The adjusted average delay length and the proportion of cases delayed beyond the 15-minute threshold were not significantly different, although both have been slightly higher since the onset of COVID-19. The proportion of cases that started early, as well as the proportion starting more than 15 minutes early, has also trended down since the onset of the pandemic, but again this difference was not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital; the start of COVID-19 as well as both COVID-19 peaks were associated with increased delays.

Figure 1. (A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from the pre-COVID-19 period (mean [SD] minutes, 88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87]; P = .78). A similar trend held for comparisons of the proportion of cases with turnover time past 90 minutes and the average time past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both panels demonstrate a slight initial increase in turnover time at the start of COVID-19, which stabilized with little variation thereafter.

Table 2. Turnover Time Analysis

Figure 2. (A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.

 

 

Discussion

We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic identified more than 400 types of surgical procedures with negatively impacted outcomes when compared with surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these procedure types, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as SARS-CoV-2 positive, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Preoperative testing likely helped to maintain OR efficiency in an indirect way: when test results were not available before the scheduled procedure, cases were cancelled, leaving more staff available for fewer cases.

After vaccines became widely available to the public, preoperative testing requirements were relaxed, and only patients who were not fully vaccinated or who were severely immunocompromised were required to test prior to procedures. However, only approximately 40% of the Tennessee population was fully vaccinated in 2021, a figure that reflects the patient population of VUMC.5 As a result, many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13

 

 

Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate the risk of infection for patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made at the surgeon or center level, which could lead to variability in efficiency trends.14 One study of neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not due solely to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of cases and a loss of health care revenue, and caused many patients to go without adequate care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency during COVID-19 and learn how to better maintain efficiency under future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased access to care.

Limitations

Our data are from a single center and therefore may not be representative of other hospitals’ experiences, given differing populations and differing impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, patient and OR timing data are generated digitally but entered manually by nurses into the electronic medical record, making them prone to error and variability. In our experience, however, any such error is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened under the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency through the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important for preventing delays in care and for maintaining the steady revenue that hospitals and other health care entities need to remain solvent. Further study of OR efficiency is needed so that health care systems can prepare for future pandemics and other resource-straining events and continue to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; david.c.liles.1@vumc.org

Disclosures: None reported.

References

1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity-adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017

2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x

3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79

4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657

5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279

6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592

7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157

8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130

9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142

10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520

11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044

12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173

13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010

14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5

15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
208-213
Sections
Article PDF
Article PDF

From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.

 

 

Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P=.004) (Table 1).

First-Start Time Analysis

The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.

(A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.

Turnover Time Analysis

(A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.

 

 

Discussion

We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.

After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13

 

 

Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.

Limitations

Our data are from a single center and therefore may not be representative of experiences of other hospitals due to different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, data for patient and OR timing are digitally generated and are entered manually by nurses in the electronic medical record, making it prone to errors and variability. This is in our experience, and if any error is present, we believe it is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important in preventing delays in care and maintaining a steady revenue in order for hospitals and other health care entities to remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; david.c.liles.1@vumc.org

Disclosures: None reported.

From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.

 

 

Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P=.004) (Table 1).

First-Start Time Analysis

The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.

(A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.

Turnover Time Analysis

(A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.

 

 

Discussion

We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.

After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13

 

 

Globally, many surgical centers halted all elective surgeries during the initial COVID-19 surge to prevent a PPE shortage and mitigate the risk of infection for patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so this decision was made at the surgeon or center level, which could lead to variability in efficiency trends.14 One study of neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not due solely to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of cases and a loss of health care revenue, and caused many patients to go without adequate care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency during COVID-19 and learn how to better maintain OR efficiency under future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased access to care.

Limitations

Our data are from a single center and therefore may not be representative of the experiences of other hospitals, which serve different populations and were affected differently by COVID-19. However, given our center's high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgical practice. Notably, patient and OR timing data are digitally generated but entered manually by nurses into the electronic medical record, making them prone to error and variability; in our experience, however, any such error is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened under the stresses of supply chain issues, staffing shortages, and case cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continuously functional neurosurgical ORs are important for preventing delays in care and maintaining the steady revenue that hospitals and other health care entities need to remain solvent. Further study of OR efficiency is needed so that health care systems can prepare for future pandemics and other resource-straining events and continue to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; david.c.liles.1@vumc.org

Disclosures: None reported.

References

1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity-adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017

2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x

3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79

4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657

5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279

6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592

7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157

8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130

9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142

10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520

11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044

12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173

13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010

14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5

15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691


Best Practice Implementation and Clinical Inertia

Article Type
Changed
Display Headline
Best Practice Implementation and Clinical Inertia

From the Department of Medicine, Brigham and Women’s Hospital, and Harvard Medical School, Boston, MA.

Clinical inertia is defined as the failure of clinicians to initiate or escalate guideline-directed medical therapy to achieve treatment goals for well-defined clinical conditions.1,2 Evidence-based guidelines recommend optimal disease management with readily available medical therapies throughout the phases of clinical care. Unfortunately, the care provided to individual patients undergoes multiple modifications throughout the disease course, resulting in divergent pathways, significant deviations from treatment guidelines, and failure of “safeguard” checkpoints to reinstate, initiate, optimize, or stop treatments. Clinical inertia generally describes rigidity or resistance to change in implementing evidence-based guidelines. The term describes treatment behavior on the part of an individual clinician, not organizational inertia, which encompasses both internal factors (the immediate clinical practice setting) and external factors (national and international guidelines and recommendations) and ultimately leads to resistance to optimizing disease treatment and therapeutic regimens. Individual clinicians’ inertia, in the form of resistance to guideline implementation and evidence-based principles, can be one factor driving organizational inertia. Such individual behavior, in turn, can be dictated by personal beliefs, knowledge, interpretation, skills, management principles, and biases. The terms therapeutic inertia and clinical inertia should not be confused with nonadherence on the patient’s part when the clinician follows best practice guidelines.3

Clinical inertia has been described in several clinical domains, including diabetes,4,5 hypertension,6,7 heart failure,8 depression,9 pulmonary medicine,10 and complex disease management.11 Clinicians can set suboptimal treatment goals because of specific beliefs and attitudes about optimal therapeutic targets. For example, when treating a patient with a chronic disease that is presently stable, a clinician might elect to initiate suboptimal treatment, as escalation of treatment may not seem a priority in stable disease; the clinician may also have concerns about overtreatment. Other factors that can contribute to clinical inertia (ie, undertreatment despite indications for treatment) relate to the patient, the clinical setting, and the organization, as well as the need to individualize therapies for specific patients. Organizational inertia is the initial global resistance by the system to implementation; it can slow the dissemination and adaptation of best practices but eventually declines over time. Individual clinical inertia, on the other hand, is likely to persist after the system-level rollout of guideline-based approaches.

The trajectory of dissemination, implementation, and adaptation of innovations and best practices is illustrated in the Figure. When guidelines and medical societies endorse the adoption of innovations or practice change after the benefits of such changes have been established by regulatory bodies, uptake can be hindered by both organizational and clinical inertia. Overcoming inertia to system-level change requires addressing individual clinicians, along with practice and organizational factors, in order to ensure systematic adaptation. From the clinician's perspective, training and cognitive interventions to improve adaptation and coping skills can improve understanding of treatment options through standardized educational and behavior-modification tools, direct and indirect performance feedback, and decision support delivered through a continuous improvement approach at both the individual and system levels.

Figure. Trajectory of innovations, dissemination, and organizational adaptations.

Addressing inertia in clinical practice requires a deep understanding of the individual and organizational elements that foster resistance to adapting best practice models. Research that explores tools and approaches to overcome inertia in managing complex diseases is a key step in advancing clinical innovation and disseminating best practices.

Corresponding author: Ebrahim Barkoudah, MD, MPH; ebarkoudah@bwh.harvard.edu

Disclosures: None reported.

References

1. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012

2. Allen JD, Curtiss FR, Fairman KA. Nonadherence, clinical inertia, or therapeutic inertia? J Manag Care Pharm. 2009;15(8):690-695. doi:10.18553/jmcp.2009.15.8.690

3. Zafar A, Davies M, Azhar A, Khunti K. Clinical inertia in management of T2DM. Prim Care Diabetes. 2010;4(4):203-207. doi:10.1016/j.pcd.2010.07.003

4. Khunti K, Davies MJ. Clinical inertia—time to reappraise the terminology? Prim Care Diabetes. 2017;11(2):105-106. doi:10.1016/j.pcd.2017.01.007

5. O’Connor PJ. Overcome clinical inertia to control systolic blood pressure. Arch Intern Med. 2003;163(22):2677-2678. doi:10.1001/archinte.163.22.2677

6. Faria C, Wenzel M, Lee KW, et al. A narrative review of clinical inertia: focus on hypertension. J Am Soc Hypertens. 2009;3(4):267-276. doi:10.1016/j.jash.2009.03.001

7. Jarjour M, Henri C, de Denus S, et al. Care gaps in adherence to heart failure guidelines: clinical inertia or physiological limitations? JACC Heart Fail. 2020;8(9):725-738. doi:10.1016/j.jchf.2020.04.019

8. Henke RM, Zaslavsky AM, McGuire TG, et al. Clinical inertia in depression treatment. Med Care. 2009;47(9):959-967. doi:10.1097/MLR.0b013e31819a5da0

9. Cooke CE, Sidel M, Belletti DA, Fuhlbrigge AL. Clinical inertia in the management of chronic obstructive pulmonary disease. COPD. 2012;9(1):73-80. doi:10.3109/15412555.2011.631957

10. Whitford DL, Al-Anjawi HA, Al-Baharna MM. Impact of clinical inertia on cardiovascular risk factors in patients with diabetes. Prim Care Diabetes. 2014;8(2):133-138. doi:10.1016/j.pcd.2013.10.007


The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Article Type
Changed
Display Headline
The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction.

Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with LVEF <35%, severe coronary artery disease amenable to PCI, and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.

 

 

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amenable to CABG and an LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show a survival benefit, but when follow-up of the same study population was extended to 9.8 years, the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group than with OMT alone (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation for CABG over medical therapy in patients with multivessel disease and low ejection fraction.4
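To put the magnitude of this long-term benefit in context, the all-cause mortality rates quoted above imply the following absolute risk reduction (ARR) and number needed to treat (NNT) over the roughly 10-year follow-up; this is a back-of-the-envelope calculation from the reported percentages, not a figure reported by the STICHES investigators:

\[
\text{ARR} = 66.1\% - 58.9\% = 7.2 \text{ percentage points}, \qquad
\text{NNT} \approx \frac{1}{0.072} \approx 14
\]

That is, on the order of 14 patients would need to undergo CABG rather than receive medical therapy alone to prevent one death over that period, assuming the trial event rates apply.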

Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; HR, 0.99; 95% CI, 0.78-1.27; P = .96). Moreover, LVEF improvement, assessed on follow-up echocardiograms read by the core lab, did not differ between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease, and severely reduced ejection fraction that historically has been excluded from large-scale randomized controlled studies comparing PCI plus OMT with OMT alone.7 However, there are several points to consider when interpreting the results. First, further details of the PCI procedures are needed. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index based on the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. Importantly, this jeopardy score was operator-reported, and the core lab-adjudicated anatomical revascularization rate may be lower. Although viability testing, primarily using cardiac magnetic resonance imaging, was performed in most patients, the correlation between the revascularized territories and the viable segments has yet to be reported. Moreover, procedural details such as the use of intravascular ultrasound and physiological testing, both known to improve clinical outcomes, need to be reported.8,9

Second, although ischemic cardiomyopathy is highly prevalent, the patients included in this study were highly selected relative to daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Enrolled individuals were largely stable patients with less complex coronary anatomy, as evidenced by the median interval of 80 days from angiography to randomization. Considering the degree of left ventricular dysfunction in this population, only 14% of patients had left main disease and half had only 2-vessel disease. The severity of left main disease also needs to be clarified, as patients whose disease the operator judged to be critical were likely not enrolled. Furthermore, because the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease for CABG, patients with more severe and complex disease were less likely to be included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, so the data do not apply to patients presenting with acute coronary syndrome.

 

 

Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group than in the OMT group (5.2% vs 9.3%), as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared with that for spontaneous MI. Moreover, with longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow-up despite negative results at the time of the original publication.

Fourth, the REVIVED trial randomized significantly fewer patients than the STICH trial, and the authors reported fewer primary-outcome events than the number estimated to be needed to achieve adequate power for the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make a comparison of PCI vs CABG in this patient population infeasible.

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% had severe angina at baseline. This is important to consider when interpreting the patient-reported health status results, as previous studies have shown that patients with worse angina at baseline derive the largest improvement in QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and stable multivessel ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be considered as an initial strategy, with the addition of PCI weighed only after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are needed.

Practice Points

  • Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO

References

1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES

2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356

3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001

4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA

6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606

7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA

8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361

9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial.  J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013

10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370

11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
202-205
Sections
Article PDF
Article PDF

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction. Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with LVEF <35% with severe coronary artery disease amendable to PCI and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.

 

 

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amendable to CABG and a LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4

Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; 95% CI, 0.78-1.28; P = .96). Moreover, the degree of LVEF improvement, assessed by follow-up echocardiogram read by the core lab, showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease and severely reduced ejection fraction, that historically have been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported and the core-lab adjudicated anatomical revascularization rate may be lower. Although viability testing primarily utilizing cardiac magnetic resonance imaging was performed in most patients, correlation between the revascularization territory and the viable segments has yet to be reported. Moreover, procedural details such as use of intravascular ultrasound and physiological testing, known to improve clinical outcome, need to be reported.8,9

Second, there is a high prevalence of ischemic cardiomyopathy, and it is important to note that the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Individuals were largely stable patients with less complex coronary anatomy as evidenced by the median interval from angiography to randomization of 80 days. Taking into consideration the degree of left ventricular dysfunction for patients included in the trial, only 14% of the patients had left main disease and half of the patients only had 2-vessel disease. The severity of the left main disease also needs to be clarified as it is likely that patients the operator determined to be critical were not enrolled in the study. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.

 

 

Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group compared to the OMT group (5.2% vs 9.3%) as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared to spontaneous MI. Moreover, with the longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow up despite negative results at the time of the original publication.

Fourth, the REVIVED trial randomized a significantly lower number of patients compared to the STICH trial, and the authors reported fewer primary-outcome events than the estimated number needed to achieve the power to assess the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make comparison of PCI vs CABG in this patient population unfeasible.

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% of the patients had baseline severe angina. This is important to consider when interpreting the results of the patient-reported health status as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone as an initial strategy may be considered against the addition of PCI after careful risk and benefit discussion. Further details about revascularization and extended follow-up data from the REVIVED trial are necessary.

Practice Points

  • Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

Taishi Hirai MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO



References

1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES

2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356

3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001

4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA

6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606

7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA

8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361

9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013

10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370

11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558


Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane


Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia with either intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019 and were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical records.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.

Main results: POD occurred in 29 of the 281 patients (10.3%) in the total cohort and was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). In multivariable logistic regression, sevoflurane-based anesthesia was associated with an increased risk of POD compared with propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
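For readers interested in the analytic approach, the following is a minimal sketch of a multivariable logistic regression of POD on anesthetic agent, age, and postoperative day 1 pain score, in the spirit of the analysis described above. The data and coefficients are entirely synthetic; this is not the study's dataset or code, and the resulting estimates will not match the reported odds ratios.

```python
# Illustrative sketch only: multivariable logistic regression of POD on anesthetic
# agent, age, and day 1 pain score, using synthetic data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 281
df = pd.DataFrame({
    "sevoflurane": rng.integers(0, 2, n),   # 1 = sevoflurane, 0 = propofol
    "age": rng.normal(73, 6, n),            # age in years
    "pain_day1": rng.integers(0, 11, n),    # 0-10 pain rating on postoperative day 1
})
# Synthetic outcome that loosely mirrors the reported direction of association
logit_p = -12 + 1.2 * df["sevoflurane"] + 0.12 * df["age"] + 0.2 * df["pain_day1"]
df["pod"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("pod ~ sevoflurane + age + pain_day1", data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```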

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed twice daily on postoperative days 1, 2, and 3 by investigators who were blinded to the anesthesia regimen, using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM comprises 4 features: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, the first and second features must both be present, together with either the third or the fourth. The average of the CAM-S scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were determined by the presence of delirium on CAM assessment on any postoperative day.
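Stated as a simple decision rule, the CAM algorithm described above can be sketched as follows; the function and its structure are illustrative rather than taken from the study protocol.

```python
# Minimal sketch of the CAM decision rule described above (illustrative only).
def cam_positive(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """Delirium is present when features 1 and 2 occur together with feature 3 or 4."""
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: acute fluctuating confusion with inattention and disorganized thinking
print(cam_positive(True, True, True, False))  # True
```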

Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student’s t-test).
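The incidence comparison can be approximately reconstructed from the reported figures. In the sketch below, the event counts (35 of 106 with propofol and 24 of 103 with sevoflurane) are back-calculated from the reported percentages and group sizes, so this should be read as an illustration rather than the investigators' actual analysis.

```python
# Approximate reconstruction of the reported incidence comparison; counts are
# back-calculated from the reported percentages and are therefore assumptions.
from scipy.stats import chi2_contingency

table = [[35, 106 - 35],   # propofol: POD, no POD
         [24, 103 - 24]]   # sevoflurane: POD, no POD
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ≈ .12, in line with the reported P = .119
```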

Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
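As a companion to the sample-size point above, the sketch below shows how a standard two-proportion sample-size estimate is computed. The event rates, power, and alpha are illustrative, and the result will not reproduce the investigators' figure of 316 per arm because their design assumptions are not reported here.

```python
# Illustrative two-proportion sample-size estimate; inputs are assumptions and do
# not reflect the investigators' actual design parameters.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.33, 0.233)   # Cohen's h for 33.0% vs 23.3%
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                         power=0.80, alternative="two-sided")
print(round(n_per_arm))  # required participants per arm under these assumptions
```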

 

 

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR and examined the agents’ respective associations with POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.

 

 

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating the risk of POD and improving clinical outcomes. An important step toward a better understanding of these modifiable risk factors is to clearly quantify the intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxic effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences in target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature underlying differences in the incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as factors in the metabolism of anesthetics, and variations in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spine vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials investigating the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z


Effectiveness of Colonoscopy for Colorectal Cancer Screening in Reducing Cancer-Related Mortality: Interpreting the Results From Two Ongoing Randomized Trials 


Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data for 84,585 participants (89.1% of all participants originally included in the trial); the remainder were excluded or could not be analyzed because of missing follow-up data in the usual-care group. Men (50.1%) and women (49.9%) were equally represented, the median age at entry was 59 years, and the median follow-up was 10 years; characteristics were otherwise balanced. Among participants who underwent colonoscopy, good bowel preparation was reported in 91% and cecal intubation was achieved in 96.8%. Overall, 42% of invited participants underwent screening, although rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of the screened group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding, and there were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.20% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colorectal cancer over a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening had all participants assigned to the screening group undergone screening. In this analysis, the 10-year risk of colorectal cancer decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.55-0.83).
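The reported number needed to invite follows directly from the absolute risk difference between groups; the brief check below uses the 10-year risks stated above (0.98% vs 1.20%).

```python
# Arithmetic check of the number needed to invite (NNI) using the reported 10-year risks.
risk_invited, risk_usual = 0.0098, 0.0120
arr = risk_usual - risk_invited                      # absolute risk reduction
print(f"RR ≈ {risk_invited / risk_usual:.2f}, NNI ≈ {1 / arr:.0f}")  # RR ≈ 0.82, NNI ≈ 455
```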

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

 

 

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Individuals aged 60 years at the time of entry were identified from a population-based registry maintained by the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.20% (121) of people in the FIT group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations and 15 major bleeding events noted in the colonoscopy group. More right-sided adenomas were detected in the colonoscopy group.
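
As a check on the reported relative risk for cancer detection: 49/31,140 ≈ 0.157% in the colonoscopy group and 121/60,300 ≈ 0.201% in the FIT group, for a ratio of approximately 0.157/0.201 ≈ 0.78, consistent with the relative risk reported above.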

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

 

 

Commentary 

The first colonoscopy screening recommendations were established in the mid-1990s in the United States, and over the subsequent 2 decades colonoscopy has been the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allows for removal of potentially precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer, and colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized clinical trial data to validate the efficacy of colonoscopy screening for reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses an important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, an 18% decrease in the risk of colorectal cancer over a 10-year period was noted in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and raise the question of whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. There are, however, several limitations to the Bretthauer et al study.

Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort actually underwent screening colonoscopy, which raises the question of whether the modest effect observed simply reflects low participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by approximately 50%. These findings are more in line with prior published studies regarding the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that a larger reduction in colorectal cancer and colorectal cancer–related death will be observed with longer follow-up.

 

 

While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial (SCREESCO), also ongoing, seeks to address this important gap by comparing once-only colonoscopy, FIT (2 rounds, 2 years apart), and no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy participation among those invited (35%), a limitation shared with the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this trial, given its very low reported adenoma detection rate.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. If the results presented by Bretthauer et al reflect the current real-world scenario, colonoscopy screening may not be viewed as a more effective screening tool than simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question; however, its very low participation rate could substantially underestimate the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and carries a grade A recommendation from the United States Preventive Services Task Force.4

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (grade B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help clarify the magnitude of benefit of these modalities.

Practice Points

  • Current guidelines continue to strongly recommend screening for colorectal cancer in those aged 45 to 75 years.
  • The optimal screening modality and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

References

1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.

2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417

3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969

4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening




Residents react: Has residency become easier or overly difficult?


Medical residents have cleared many hurdles to get where they are, as detailed in Medscape’s Residents Salary and Debt Report 2022, which describes their challenges with compensation and school loans as well as long hours and strained personal relationships.

Whereas 72% of residents described themselves as “very satisfied” or “satisfied” with their professional training experience, only 27% felt that positively about their pay. Satisfaction with compensation increased somewhat further into residency, reaching 35% in year 5.

Respondents to the survey described mixed feelings about residency, with some concluding it is a rite of passage.
 

Do residents have it easier today?

If so, is that rite of passage getting any easier? You’ll get different answers from residents and physicians.

Medscape asked respondents whether their journey to residency became easier once the Step 1 exam was converted to pass/fail and residency interviews moved online because of the COVID-19 pandemic.

Many residents said their journey became easier, less stressful, and less expensive under the new Step 1 format. One respondent said he was freed up to focus more intently on higher-yield academic goals such as research.

Another respondent called the pass/fail change a “total game-changer,” because it lets applicants apply to any specialty and have qualifications other than test scores considered. A resident who took Step 1 before pass/fail was instituted described the “insurmountable stress associated with studying for Step 1 to get the highest score you possibly could.”

But not all residents welcomed the change; some disliked how difficult it has become to differentiate themselves, beyond medical school pedigree, in the absence of Step 1 scores.

Meanwhile, some doctors posting comments to the Medscape report strongly disagreed with the idea that residency life is getting harder. They depict residency as a rite of passage under the best of circumstances.

“Whatever issues there may be [today’s residents] are still making eight times what I got and, from what I’ve seen, we had a lot more independent responsibilities,” one physician commenter said.

Other doctors were more sympathetic and worried about the future price to be paid for hardships during residency. “Compensation should not be tied to the willingness to sacrifice the most beautiful years of life,” one commentator wrote.
 

Online interviews: Pros and cons

Many resident respondents celebrated the opportunity to interview for residency programs online. Some who traveled to in-person interviews before the pandemic said they racked up as much as $10,000 in travel costs, adding to their debt loads.

But not everyone was a fan. Other residents sniped that peers can apply to more residencies and “hoard” interviews, making the competition that much harder.

And how useful are online interviews to a prospective resident? “Virtual interviews are terrible for getting a true sense for a program or even the people,” a 1st-year family medicine resident complained. And it’s harder for an applicant “to shine when you’re on Zoom,” a 1st-year internal medicine resident opined.
 

Whether to report harassment

In the survey, respondents were asked whether they had ever witnessed sexual abuse, harassment, or misconduct and, if so, what they did about it. Among those who had, many opted to take no action, fearing retaliation or retribution. “I saw a resident made out to be a ‘problem resident’ when reporting it and then ultimately fired,” one respondent recounted.

Other residents said they felt unsure about the protocol, whom to report to, or even what constituted harassment or misconduct. “I didn’t realize [an incident] was harassment until later,” one resident said. Others thought “minor” or “subtle” incidents did not warrant action; “they are typically microaggressions and appear accepted within the culture of the institution.”

Residents’ confusion heightened when the perpetrator was a patient. “I’m not sure what to do about that,” a respondent acknowledged. An emergency medicine resident added, “most of the time … it is the patients who are acting inappropriately, saying inappropriate things, etc. There is no way to file a complaint like that.”
 

Rewards and challenges for residents

Among the most rewarding parts of residency, respondents cited developing specific skills such as surgical techniques, job security, and “learning a little day by day,” in the words of a 1st-year gastroenterology resident.

Others felt gratified by the chance to help patients, families, and their teams, and to advance social justice and health equity.

But challenges abound – chiefly money struggles. A 3rd-year psychiatry resident lamented “being financially strapped in the prime of my life from student loans and low wages.”

Stress and emotional fatigue also came up often as major challenges. “Constantly being told to do more, more presentations, more papers, more research, more studying,” a 5th-year neurosurgery resident bemoaned. “Being expected to be at the top of my game despite being sleep-deprived, depressed, and burned out,” a 3rd-year ob.gyn. resident groused.

But some physician commenters urged residents to look for long-term growth behind the challenges. “Yes, it was hard, but the experience was phenomenal, and I am glad I did it,” one doctor said.

A version of this article first appeared on Medscape.com.


A plane crash interrupts a doctor’s vacation


Emergencies happen anywhere, anytime – and sometimes physicians find themselves in situations where they are the only ones who can help. “Is There a Doctor in the House?” is a new series telling these stories.

When the plane crashed, I was asleep. I had arrived the evening before with my wife and three sons at a house on Kezar Lake on the Maine–New Hampshire border. We were going to spend a week there with my wife’s four brothers and their families. I was woken by people screaming my name. I jumped out of bed and ran downstairs. My kids had been watching a float plane circling and gliding along the lake. It had crashed into the water and flipped upside down. My oldest brother-in-law jumped into his ski boat and we sped out to the scene.

All we can see are the plane’s pontoons. The rest is underwater. A woman has already surfaced, screaming. I dive in.

I find the woman’s husband and 3-year-old son struggling to get free from the plane through the smashed windshield. They manage to get to the surface. The pilot is dead, impaled through the chest by the left wing strut.

The big problem: A little girl, whom I would learn later is named Lauren, remained trapped. The water is murky but I can see her, a 5- or 6-year-old girl with this long hair, strapped in upside down and unconscious.

The mom and I dive down over and over, pulling and ripping at the door. We cannot get it open. Finally, I’m able to bend the door open enough where I can reach in, but I can’t undo the seatbelt. In my mind, I’m debating, should I try and go through the front windshield? I’m getting really tired, I can tell there’s fuel in the water, and I don’t want to drown in the plane. So I pop up to the surface and yell, “Does anyone have a knife?”

My brother-in-law shoots back to shore in the boat, screaming, “Get a knife!” My niece gets in the boat with one. I’m standing on the pontoon, and my niece is in the front of the boat calling, “Uncle Todd! Uncle Todd!” and she throws the knife. It goes way over my head. I can’t even jump for it, it’s so high.

I have to get the knife. So, I dive into the water to try and find it. Somehow, the black knife has landed on the white wing, 4 or 5 feet under the water. Pure luck. It could have sunk down a hundred feet into the lake. I grab the knife and hand it to the mom, Beth. She’s able to cut the seatbelt, and we both pull Lauren to the surface.

I lay her out on the pontoon. She has no pulse and her pupils are fixed and dilated. Her mom is yelling, “She’s dead, isn’t she?” I start CPR. My skin and eyes are burning from the airplane fuel in the water. I get her breathing, and her heart comes back very quickly. Lauren starts to vomit and I’m trying to keep her airway clear. She’s breathing spontaneously and she has a pulse, so I decide it’s time to move her to shore.

We pull the boat up to the dock and Lauren’s now having anoxic seizures. Her brain has been without oxygen, and now she’s getting perfused again. We get her to shore and lay her on the lawn. I’m still doing mouth-to-mouth, but she’s seizing like crazy, and I don’t have any way to control that. Beth is crying and wants to hold her daughter gently while I’m working.

Someone had called 911, and finally this dude shows up with an ambulance, and it’s like something out of World War II. All he has is an oxygen tank, but the mask is old and cracked. It’s too big for Lauren, but it sort of fits me, so I’m sucking in oxygen and blowing it into the girl’s mouth. I’m doing whatever I can, but I don’t have an IV to start. I have no fluids. I got nothing.

As it happens, I’d done my emergency medicine training at Maine Medical Center, so I tell someone to call them and get a Life Flight chopper. We have to drive somewhere where the chopper can land, so we take the ambulance to the parking lot of the closest store called the Wicked Good Store. That’s a common thing in Maine. Everything is “wicked good.”

The whole town is there by that point. The chopper arrives. The ambulance doors pop open and a woman says, “Todd?” And I say, “Heather?”

Heather is an emergency flight nurse whom I’d trained with many years ago. There’s immediate trust. She has all the right equipment. We put in breathing tubes and IVs. We stop Lauren from seizing. The kid is soon stable.

There is only one extra seat in the chopper, so I tell Beth to go. They take off.

Suddenly, I begin to doubt my decision. Lauren had been underwater for 15 minutes at minimum. I know how long that is. Did I do the right thing? Did I resuscitate a brain-dead child? I didn’t think about it at the time, but if that patient had come to me in the emergency department, I’m honestly not sure what I would have done.

So, I go home. And I don’t get a call. The FAA and sheriff arrive to take statements from us. I don’t hear from anyone.

The next day I start calling. No one will tell me anything, so I finally get to one of the pediatric ICU attendings who had trained me. He says Lauren literally woke up and said, “I have to go pee.” And that was it. She was 100% normal. I couldn’t believe it.

Here’s a theory: In kids, there’s something called the glottic reflex. I think her glottic reflex went off as soon as she hit the water, which basically closed her airway. So when she passed out, she could never get enough water in her lungs and still had enough air in there to keep her alive. Later, I got a call from her uncle. He could barely get the words out because he was in tears. He said Lauren was doing beautifully.  

Three days later, I drove to Lauren’s house with my wife and kids. I had her read to me. I watched her play on the jungle gym for motor function. All sorts of stuff. She was totally normal.

Beth told us that the night before the accident, her mother had given the women in her family what she called a “miracle bracelet,” a bracelet that is supposed to give you one miracle in your life. Beth said she had the bracelet on her wrist the day of the accident, and now it’s gone. “Saving Lauren’s life was my miracle,” she said.

Funny thing: For 20 years, I ran all the EMS, police, fire, and ambulance services in Boulder, Colo., where I live. I wrote all the protocols, and I would never advise any of my paramedics to dive into jet fuel to save someone. That was risky. But at the time, it was totally automatic. I think it taught me not to give up in certain situations, because you really don’t know.

Dr. Dorfman is an emergency medicine physician in Boulder, Colo., and medical director at Cedalion Health.
 

A version of this article first appeared on Medscape.com.


Optimize HF meds rapidly and fully after hospital discharge: STRONG-HF


Clinicians who prescribe heart failure meds are holding the best hand they’ve ever had, but with so much underuse and suboptimal dosing in actual practice, it seems many may not appreciate the value of their cards. But a major randomized trial that has captured the field’s attention may embolden them to go all in.

Results showed that a strategy of early, rapid up-titration of multiple guideline-directed meds in patients hospitalized with heart failure, compared with a usual-care approach, cut their 6-month risk for death or HF readmission by a steep 34% (P = .002).

The drugs had been started and partly up-titrated in the hospital with the goal of full up-titration within 2 weeks after discharge.

Patients tolerated the high-intensity approach well, researchers said. Their quality-of-life scores improved (P < .0001) compared with the usual-care group, and adverse events were considered few and manageable in the international trial of more than 1,000 patients.

Safety on the high-intensity strategy depended on close patient monitoring at frequent, planned clinic visits, with up-titrations guided by clinical signs and natriuretic peptide levels, observed Alexandre Mebazaa, MD, PhD, University of Paris and Public Hospitals of Paris.

Dr. Mebazaa is principal investigator on the trial, called STRONG-HF, which he presented at the American Heart Association scientific sessions, held in Chicago and virtually. He is also lead author on the study’s same-day publication in the Lancet.

The high-intensity strategy’s superiority emerged early in the trial, which was halted early on the data safety monitoring board’s recommendation, with about 90% of follow-ups completed. The board “felt it was unethical to keep patients in usual care,” Dr. Mebazaa said at a press conference.
 

A dramatic change

The next step, he said, will be to educate the heart failure community on the high-intensity care technique so it can swiftly enter clinical practice. Currently in acute heart failure, “very few patients are monitored after discharge and treated with full doses of heart failure therapies.”

Adoption of the strategy “would be a dramatic change from what’s currently being done,” said Martin B. Leon, MD, NewYork-Presbyterian/Columbia University Irving Medical Center, New York, who moderated the press conference.

Only an estimated 5% of patients with HF in the United States receive full guideline-directed medical therapy, Dr. Leon said, “so the generalizability of this strategy, with careful follow-up that has safety involved in it, is absolutely crucial.”

But the potential impact of this high-intensity approach on resource use is unknown, raising questions about how widely and consistently it could be implemented, said Dr. Leon, who is not connected with STRONG-HF.

The trial called for in-hospital initiation of the three distinct drug classes that, at the time, were the core of guideline-directed HF therapy, with up-titration to 50% of the recommended dosage by hospital discharge and to 100% within 2 weeks after discharge.

The meds included a beta-blocker, a mineralocorticoid receptor antagonist (MRA), and a renin-angiotensin system inhibitor (RASI). The latter could be an ACE inhibitor, angiotensin-receptor blocker (ARB), or angiotensin receptor-neprilysin inhibitor (ARNI).
 

How about a fourth drug?

Conspicuously absent from the list, for contemporary practice, was an SGLT2 inhibitor, a class that entered the HF guidelines well after STRONG-HF was designed. SGLT2 inhibitors would undoubtedly join the other three agents were the high-intensity strategy to enter practice, potentially changing its complexity and safety profile.

But Dr. Mebazaa and other experts don’t see that as a big challenge and would expect a smooth transition to a high-intensity approach that also includes the SGLT2 inhibitors.

STRONG-HF was necessary in part because many clinicians have been “reluctant” to take full advantage of three agents that had been the basis of guideline-directed therapy, he told this news organization.

That reluctance stemmed from concerns that beta-blockers might worsen the heart failure, ACE inhibitors could hurt the kidneys, or MRAs might cause hyperkalemia, Dr. Mebazaa said. The STRONG-HF high-intensity regimen, therefore, demanded multiple clinic visits for close follow-up.

But the SGLT2 inhibitors “are known to be rather safe drugs, at least much safer than the three others,” he said. So, it seems unlikely that their addition to a beta-blocker, RASI, and MRA in patients with HF would worsen the risk of adverse events.

John G.F. Cleland, MD, PhD, agrees. With addition of the fourth agent, “You may need to be a little bit more careful with renal function, just in that first couple of weeks,” he told this news organization. “But I think it would be easy to add an SGLT2 inhibitor into this regimen. And in general, there’s no titration with an SGLT2 inhibitor, so they’ll all be on full dose predischarge.”

Given the drugs’ diuretic-like action, moreover, some patients might be able to pull back on their loop diuretics, speculated Dr. Cleland, from the University of Glasgow’s School of Health and Wellbeing.

The prospect of a high-intensity strategy’s wide implementation in practice presents both “challenges and opportunities,” Amanda R. Vest, MBBS, MPH, Tufts University, Boston, told this news organization.

“There may be additional challenges in terms of ensuring we avoid hypotension or acute kidney injury in the up-titration phase,” said Dr. Vest, who is medical director of her center’s cardiac transplantation program but not connected with STRONG-HF.

“But it also gives us opportunities,” she added, “because there are some patients, especially in that vulnerable postdischarge phase, who are actually much more able to tolerate introduction of an SGLT2 inhibitor than, for example, an ACE inhibitor, ARB, or ARNI – or maybe a beta-blocker if they’ve been in a low cardiac-output state.” Effective dosing would depend on “the personalization and skill of the clinician in optimizing the medications in their correct sequence,” Dr. Vest said.

“It’s challenging to think that we would ever get to 100% up-titration,” she added, “and even in this excellent study, they didn’t get to 100%.” But as clinicians gain experience with the high-intensity strategy, especially as the SGLT2 inhibitors are included, “I think we can reasonably expect more progress to be made in these up-titration skills.”
 

No restrictions on LVEF

The researchers entered 1,078 patients hospitalized with acute HF in 14 countries across Africa, Europe, the Middle East, and South America, and randomly assigned them to the high-intensity management strategy or usual care.

About 60% of the patients were male and 77% were White. There were no entry restrictions based on left ventricular ejection fraction (LVEF), which exceeded 40% in almost a third of cases.

In the high-intensity care group’s 542 patients, the three agents were up-titrated to 50% of the maximum guideline-recommended dosage prior to hospital discharge, and to 100% within 2 weeks after discharge. Symptoms and laboratory biomarkers, including natriuretic peptides, were monitored closely at four planned clinical visits over the following 6 weeks.
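
For readers who want the dosing targets in concrete form, the sketch below encodes the milestones described above (50% of the guideline-recommended dose by discharge, 100% within 2 weeks after discharge) as a simple lookup and checks a hypothetical regimen against them. The drug doses, dictionary keys, and example patient are illustrative assumptions, not values from the trial protocol.

```python
# Minimal sketch of the up-titration milestones described above: 50% of the
# guideline-recommended dose by hospital discharge and 100% within 2 weeks
# after discharge. Drug names and doses are hypothetical placeholders,
# not STRONG-HF protocol values.

FULL_DOSE_MG = {
    "beta_blocker": 10,   # hypothetical full daily dose
    "rasi": 40,           # hypothetical full daily dose
    "mra": 50,            # hypothetical full daily dose
}

MILESTONES = {
    "discharge": 0.50,               # fraction of full dose expected at discharge
    "2_weeks_post_discharge": 1.00,  # full dose expected 2 weeks later
}

def milestone_met(current_mg: dict, time_point: str) -> dict:
    """Flag, per drug class, whether the expected dose fraction has been reached."""
    fraction = MILESTONES[time_point]
    return {drug: current_mg.get(drug, 0.0) >= fraction * full
            for drug, full in FULL_DOSE_MG.items()}

# Hypothetical patient at discharge: beta-blocker and MRA at half dose, RASI lagging.
print(milestone_met({"beta_blocker": 5, "rasi": 10, "mra": 25}, "discharge"))
# -> {'beta_blocker': True, 'rasi': False, 'mra': True}
```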

The 536 patients assigned to usual care were discharged and managed according to local standards, with their meds handled by their own primary care doctors or cardiologists, the published report notes. They were reevaluated by STRONG-HF clinicians 90 days after discharge.

The number of clinic visits in the first 90 postdischarge days averaged 4.8 in the high-intensity care group and 1.0 for those receiving usual care. Full up-titration was far more likely in the high-intensity care group: 55% vs. 2% for RASI agents, 49% vs. 4% for beta-blockers, and 84% vs. 46% for MRAs.

Patients in the high-intensity care group also fared significantly better on all measured parameters associated with decongestion, including weight, prevalence of peripheral edema, jugular venous pressure, NYHA functional class, and natriuretic peptide levels, the researchers said.

The primary endpoint of 180-day death from any cause or HF readmission was met by 15.2% of the high-intensity care group and 23.3% of usual-care patients, for an adjusted risk ratio (RR) of 0.66 (95% CI, 0.50-0.86; P = .0021).
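
As a rough check on the headline numbers, the published event rates can be converted into a crude risk ratio and the usual derived measures. Note that the reported RR of 0.66 was adjusted, so the crude figure below only approximates it; the group sizes are the randomized counts given above, and the number needed to treat is not reported in the article and is shown only to illustrate how the quantities relate.

```python
# Back-of-the-envelope check of the STRONG-HF primary-endpoint numbers reported
# above. Illustrative only: the trial's RR of 0.66 was adjusted, so the crude
# ratio computed here will only approximate it.

high_intensity_n, usual_care_n = 542, 536   # randomized patients per arm
high_intensity_rate = 0.152                 # 180-day death or HF readmission
usual_care_rate = 0.233

crude_rr = high_intensity_rate / usual_care_rate
relative_risk_reduction = 1 - crude_rr
absolute_risk_reduction = usual_care_rate - high_intensity_rate
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Crude risk ratio: {crude_rr:.2f}")                        # ~0.65, close to the adjusted 0.66
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # ~35%, i.e., the reported ~34%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # ~8.1 percentage points
print(f"Number needed to treat: {number_needed_to_treat:.0f}")    # ~12 patients over 6 months
```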

Subgroup analyses showed no significant interactions by age, sex, race, geography, or baseline blood pressure, renal function, or LVEF. Patients with higher vs. lower baseline natriuretic peptide levels trended toward better responses to high-intensity care (P = .08).
 

The COVID effect

The group performed a sensitivity analysis that excluded deaths attributed to COVID-19 in STRONG-HF, which launched prior to the pandemic. The high-intensity strategy’s benefit for the primary endpoint grew, with an adjusted RR of 0.61 (95% CI, 0.46-0.82; P = .0005). There was no corresponding effect on death from any cause (P = .15).

Treatment-related adverse effects in the overall trial were seen in 41.1% of the high-intensity care group and in 29.5% of those assigned to usual care.

The higher rate in the high-intensity care arm “may be related to their higher number of [clinic] visits compared to usual care,” Dr. Mebazaa said. “However, serious adverse events and fatal adverse events were similar in both arms.”

Cardiac failure was the most common adverse event, developing in about 15% of patients in both groups. It was followed by hypotension, hyperkalemia, and renal impairment, according to the published report.

Dr. Cleland cautioned that the risk of adverse events would potentially be higher should the high-intensity strategy become common clinical practice. The median age in STRONG-HF was 63, which is “10-15 years younger, on average, than the population with recently admitted heart failure that we see. There’s no doubt that older people have more multimorbidity.”

STRONG-HF was funded by Roche Diagnostics. Dr. Mebazaa discloses receiving grants from Roche Diagnostics, Abbott Laboratories, 4TEEN4, and Windtree Therapeutics; honoraria for lectures from Roche Diagnostics, Bayer, and Merck, Sharp & Dohme; consulting fees from Corteria Pharmaceuticals, S-form Pharma, FIRE-1, Implicity, 4TEEN4, and Adrenomed; and being a co-inventor on a patent involving combination therapy for patients with acute or persistent dyspnea.

Dr. Vest reports modest relationships with Boehringer Ingelheim, Corvia, and CareDx; and receiving research grants from the American Heart Association and the National Institutes of Health. Dr. Cleland discloses receiving honoraria from Idorsia; and research grants from Vifor Pharma, Medtronic, Bayer, and Bristol-Myers Squibb. Dr. Leon had no disclosures.

A version of this article first appeared on Medscape.com.


Flu vaccination associated with reduced stroke risk


Influenza vaccination is associated with a reduced risk of stroke among adults, even if they aren’t at high risk for stroke, according to new research.

The risk of stroke was about 23% lower in the 6 months following a flu shot, regardless of the patient’s age, sex, or underlying health conditions.

“There is an established link between upper respiratory infection and both heart attack and stroke. This has been very salient in the past few years throughout the COVID-19 pandemic,” study author Jessalyn Holodinsky, PhD, a stroke epidemiologist and postdoctoral fellow in clinical neurosciences at the University of Calgary (Alta.), told this news organization.

“It is also known that the flu shot can reduce risk of heart attack and hospitalization for those with heart disease,” she said. “Given both of these [observations], we thought it prudent to study whether there is a link between vaccination for influenza and stroke.”

The study was published in the Lancet Public Health.
 

Large effect size

The investigators analyzed administrative data from 2009 through 2018 from the Alberta Health Care Insurance Plan, which covers all residents of Alberta. The province provides free seasonal influenza vaccines to residents under the insurance plan.

The research team looked for stroke events such as acute ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and transient ischemic attack. They then analyzed the risk of stroke events among those with or without a flu shot in the previous 6 months. They accounted for multiple factors, including age, sex, income, location, and factors related to stroke risk, such as anticoagulant use, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, and hypertension.

Among the 4.1 million adults included in the researchers’ analysis, about 1.8 million (43%) received at least one vaccination during the study period. Nearly 97,000 people received a flu vaccine in each year they were in the study, including 29,288 who received a shot in all 10 flu seasons included in the study.

About 38,000 stroke events were recorded, including about 34,000 (90%) first stroke events. Among the 10% of strokes that were recurrent events, the maximum number of stroke events in one person was nine.

Overall, patients who received at least one influenza vaccine were more likely to be older, be women, and have higher rates of comorbidities. The vaccinated group had a slightly higher proportion of people who lived in urban areas, but the income levels were similar between the vaccinated and unvaccinated groups.

The crude incidence of stroke was higher among people who had ever received an influenza vaccination, at 1.25%, compared with 0.52% among those who hadn’t been vaccinated. However, after adjusting for age, sex, underlying conditions, and socioeconomic status, recent flu vaccination (that is, in the previous 6 months) was associated with a 23% reduced risk of stroke.
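
The apparent contradiction between the higher crude incidence among vaccinated people and the lower adjusted risk is classic confounding: the vaccinated group was older and sicker. The toy numbers below, which are invented for illustration and are not the Alberta data, show how stratifying by a single confounder such as age can reverse a crude comparison.

```python
# Synthetic illustration of confounding by age (not the Alberta study's data):
# vaccinated people are older, so their crude stroke risk looks higher even
# though, within each age stratum, their risk is lower.

strata = {
    # age group: (vaccinated_n, vaccinated_strokes, unvaccinated_n, unvaccinated_strokes)
    "under 65": (200_000,    400, 1_800_000, 5_400),   # 0.20% vs 0.30%
    "65 plus":  (800_000, 16_000,   200_000, 5_000),   # 2.00% vs 2.50%
}

vax_n    = sum(s[0] for s in strata.values())
vax_ev   = sum(s[1] for s in strata.values())
unvax_n  = sum(s[2] for s in strata.values())
unvax_ev = sum(s[3] for s in strata.values())

print(f"Crude risk, vaccinated:   {vax_ev / vax_n:.2%}")    # 1.64% -- looks worse
print(f"Crude risk, unvaccinated: {unvax_ev / unvax_n:.2%}") # 0.52%

for group, (vn, ve, un, ue) in strata.items():
    # Within each stratum the vaccinated risk is lower, which is the pattern
    # an adjusted analysis is designed to recover.
    print(f"{group}: vaccinated {ve / vn:.2%} vs unvaccinated {ue / un:.2%}")
```

An adjusted model, in effect, compares like with like within such strata, which is how the study can report a 23% lower risk despite the higher crude incidence among vaccinees.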

The significant reduction in risk applied to all stroke types, particularly acute ischemic stroke and intracerebral hemorrhage. In addition, influenza vaccination was associated with a reduced risk across all ages and risk profiles, except among patients without hypertension.

“What we were most surprised by was the sheer magnitude of the effect and that it existed across different adult age groups, for both sexes, and for those with and without risk factors for stroke,” said Dr. Holodinsky.

Vaccination was associated with a larger reduction in stroke risk in men than in women, perhaps because unvaccinated men had a significantly higher baseline risk for stroke than unvaccinated women, the study authors write.
 

Promoting cardiovascular health

In addition, vaccination was associated with a greater relative reduction in stroke risk in younger age groups, lower income groups, and those with diabetes, chronic obstructive pulmonary disease, and anticoagulant use.

Among 2.4 million people observed for the entire study period, the protective association strengthened with the number of vaccines received. People who were vaccinated serially each year had a significantly lower risk of stroke than those who received only one shot.

Dr. Holodinsky and colleagues are conducting additional research into influenza vaccination, including stroke risk in children. They’re also investigating whether the reduced risk applies to other vaccinations for respiratory illnesses, such as COVID-19 and pneumonia.

“We hope that this added effect of vaccination encourages more adults to receive the flu shot,” she said. “One day, vaccinations might be considered a key pillar of cardiovascular health, along with diet, exercise, control of hypertension and high cholesterol, and smoking cessation.”

Future research should also investigate the reasons why adults – particularly people at high risk with underlying conditions – don’t receive recommended influenza vaccines, the study authors wrote.
 

‘Call to action’

Bahar Behrouzi, an MD-PhD candidate focused on clinical epidemiology at the Institute of Health Policy, Management, and Evaluation, University of Toronto, said: “There are a variety of observational studies around the world that show that flu vaccine uptake is low among the general population and high-risk persons. In studying these questions, our hope is that we can continue to build confidence in viral respiratory vaccines like the influenza vaccine by continuing to generate rigorous evidence with the latest data.”

Ms. Behrouzi, who wasn’t involved with this study, has researched influenza vaccination and cardiovascular risk. She and her colleagues have found that flu vaccines were associated with a 34% lower risk of major adverse cardiovascular events, including a 45% reduced risk among patients with recent acute coronary syndrome.

“The broader public health message is for people to advocate for themselves and get the seasonal flu vaccine, especially if they are part of an at-risk group,” she said. “In our studies, we have positioned this message as a call to action not only for the public, but also for health care professionals – particularly specialists such as cardiologists or neurologists – to encourage or remind them to engage in conversation about the broad benefits of vaccination beyond just preventing or reducing the severity of flu infection.”

The study was conducted without outside funding. Dr. Holodinsky and Ms. Behrouzi have reported no relevant disclosures.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Influenza vaccination is associated with a reduced risk of stroke among adults, even if they aren’t at high risk for stroke, according to new research.

The risk of stroke was about 23% lower in the 6 months following a flu shot, regardless of the patient’s age, sex, or underlying health conditions.

“There is an established link between upper respiratory infection and both heart attack and stroke. This has been very salient in the past few years throughout the COVID-19 pandemic,” study author Jessalyn Holodinsky, PhD, a stroke epidemiologist and postdoctoral fellow in clinical neurosciences at the University of Calgary (Alta.) told this news organization.

“It is also known that the flu shot can reduce risk of heart attack and hospitalization for those with heart disease,” she said. “Given both of these [observations], we thought it prudent to study whether there is a link between vaccination for influenza and stroke.”

The study was published in the Lancet Public Health.
 

Large effect size

The investigators analyzed administrative data from 2009 through 2018 from the Alberta Health Care Insurance Plan, which covers all residents of Alberta. The province provides free seasonal influenza vaccines to residents under the insurance plan.

The research team looked for stroke events such as acute ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and transient ischemic attack. They then analyzed the risk of stroke events among those with or without a flu shot in the previous 6 months. They accounted for multiple factors, including age, sex, income, location, and factors related to stroke risk, such as anticoagulant use, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, and hypertension.

Among the 4.1 million adults included in the researchers’ analysis, about 1.8 million (43%) received at least one vaccination during the study period. Nearly 97,000 people received a flu vaccine in each year they were in the study, including 29,288 who received a shot in all 10 flu seasons included in the study.

About 38,000 stroke events were recorded, including about 34,000 (90%) first stroke events. Among the 10% of strokes that were recurrent events, the maximum number of stroke events in one person was nine.

Overall, patients who received at least one influenza vaccine were more likely to be older, be women, and have higher rates of comorbidities. The vaccinated group had a slightly higher proportion of people who lived in urban areas, but the income levels were similar between the vaccinated and unvaccinated groups.


Influenza vaccination is associated with a reduced risk of stroke among adults, even if they aren’t at high risk for stroke, according to new research.

The risk of stroke was about 23% lower in the 6 months following a flu shot, regardless of the patient’s age, sex, or underlying health conditions.

“There is an established link between upper respiratory infection and both heart attack and stroke. This has been very salient in the past few years throughout the COVID-19 pandemic,” study author Jessalyn Holodinsky, PhD, a stroke epidemiologist and postdoctoral fellow in clinical neurosciences at the University of Calgary (Alta.), told this news organization.

“It is also known that the flu shot can reduce risk of heart attack and hospitalization for those with heart disease,” she said. “Given both of these [observations], we thought it prudent to study whether there is a link between vaccination for influenza and stroke.”

The study was published in the Lancet Public Health.
 

Large effect size

The investigators analyzed administrative data from 2009 through 2018 from the Alberta Health Care Insurance Plan, which covers all residents of Alberta. The province provides free seasonal influenza vaccines to residents under the insurance plan.

The research team looked for stroke events such as acute ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and transient ischemic attack. They then compared the risk of stroke events between people who had and those who had not received a flu shot in the previous 6 months, accounting for multiple factors, including age, sex, income, location, and factors related to stroke risk, such as anticoagulant use, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, and hypertension.

Among the 4.1 million adults included in the researchers’ analysis, about 1.8 million (43%) received at least one vaccination during the study period. Nearly 97,000 people received a flu vaccine in each year they were in the study, including 29,288 who received a shot in all 10 flu seasons included in the study.

About 38,000 stroke events were recorded, including about 34,000 (90%) first stroke events. Among the 10% of strokes that were recurrent events, the maximum number of stroke events in one person was nine.

Overall, patients who received at least one influenza vaccine tended to be older, were more likely to be women, and had higher rates of comorbidities. The vaccinated group had a slightly higher proportion of people living in urban areas, but income levels were similar between the vaccinated and unvaccinated groups.

The crude incidence of stroke was higher among people who had ever received an influenza vaccination, at 1.25%, compared with 0.52% among those who hadn’t been vaccinated, a difference likely reflecting the older age and greater comorbidity burden of the vaccinated group. However, after adjusting for age, sex, underlying conditions, and socioeconomic status, recent flu vaccination (that is, in the previous 6 months) was associated with a 23% reduced risk of stroke.

The significant reduction in risk applied to all stroke types, particularly acute ischemic stroke and intracerebral hemorrhage. In addition, influenza vaccination was associated with a reduced risk across all ages and risk profiles, except in patients without hypertension.

“What we were most surprised by was the sheer magnitude of the effect and that it existed across different adult age groups, for both sexes, and for those with and without risk factors for stroke,” said Dr. Holodinsky.

Vaccination was associated with a larger reduction in stroke risk in men than in women, perhaps because unvaccinated men had a significantly higher baseline risk for stroke than unvaccinated women, the study authors write.

Promoting cardiovascular health

In addition, vaccination was associated with a greater relative reduction in stroke risk in younger age groups, lower income groups, and those with diabetes, chronic obstructive pulmonary disease, and anticoagulant use.

Among the 2.4 million people observed for the entire study period, the protective association strengthened with the number of vaccines received: people who were vaccinated every year had a significantly lower risk of stroke than those who received only one shot.

Dr. Holodinsky and colleagues are conducting additional research into influenza vaccination, including stroke risk in children. They’re also investigating whether the reduced risk applies to other vaccinations for respiratory illnesses, such as COVID-19 and pneumonia.

“We hope that this added effect of vaccination encourages more adults to receive the flu shot,” she said. “One day, vaccinations might be considered a key pillar of cardiovascular health, along with diet, exercise, control of hypertension and high cholesterol, and smoking cessation.”

Future research should also investigate the reasons why adults – particularly people at high risk with underlying conditions – don’t receive recommended influenza vaccines, the study authors wrote.
 

‘Call to action’

Bahar Behrouzi, an MD-PhD candidate focused on clinical epidemiology at the Institute of Health Policy, Management, and Evaluation, University of Toronto, said: “There are a variety of observational studies around the world that show that flu vaccine uptake is low among the general population and high-risk persons. In studying these questions, our hope is that we can continue to build confidence in viral respiratory vaccines like the influenza vaccine by continuing to generate rigorous evidence with the latest data.”

Ms. Behrouzi, who wasn’t involved with this study, has researched influenza vaccination and cardiovascular risk. She and her colleagues found that flu vaccines were associated with a 34% lower risk of major adverse cardiovascular events, including a 45% reduced risk among patients with recent acute coronary syndrome.

“The broader public health message is for people to advocate for themselves and get the seasonal flu vaccine, especially if they are part of an at-risk group,” she said. “In our studies, we have positioned this message as a call to action not only for the public, but also for health care professionals – particularly specialists such as cardiologists or neurologists – to encourage or remind them to engage in conversation about the broad benefits of vaccination beyond just preventing or reducing the severity of flu infection.”

The study was conducted without outside funding. Dr. Holodinsky and Ms. Behrouzi have reported no relevant disclosures.

A version of this article first appeared on Medscape.com.
