
The Association of Inpatient Occupancy with Hospital-Acquired Clostridium difficile Infection


High hospital occupancy is a fundamental challenge faced by healthcare systems in the United States.1-3 However, few studies have examined the effect of high occupancy on outcomes in the inpatient setting,4-9 and these showed mixed results. Hospital-acquired conditions (HACs), such as Clostridium difficile infection (CDI), are quality indicators for inpatient care and part of the Centers for Medicare and Medicaid Services’ Hospital-Acquired Condition Reduction Program.10-12 However, few studies—largely conducted outside of the US—have evaluated the association between inpatient occupancy and HACs. These studies showed increasing hospital-acquired infection rates with increasing occupancy.13-15 Past studies of hospital occupancy have relied on annual average licensed bed counts, which are not a reliable measure of available and staffed beds and do not account for variations in patient volume and bed supply.16 Using a novel measure of inpatient occupancy, we tested the hypothesis that increasing inpatient occupancy is associated with a greater likelihood of CDI.

METHODS

We performed a retrospective analysis of administrative data from nonfederal, acute care hospitals in California during 2008–2012 using the Office of Statewide Health Planning and Development (OSHPD) Patient Discharge Data set, a complete census of discharge records from all California-licensed general acute care hospitals. This study was approved by the OSHPD Committee for the Protection of Human Subjects and was deemed exempt by our institution’s Institutional Review Board.

Selection of Participants

The study population consisted of fee-for-service Medicare enrollees aged ≥65 years admitted through the emergency department (ED) with a hospital length of stay (HLOS) <50 days and a primary discharge diagnosis of acute myocardial infarction (MI), pneumonia (PNA), or heart failure (HF), identified through the respective Clinical Classifications Software (CCS) categories.

The sample was restricted to discharges with a HLOS of <50 days because those with longer HLOS (0.01% of the study sample) were likely different in ways that could bias our findings (eg, they were likely to be sicker). We limited our study to admissions through the ED to reduce potential selection bias by excluding elective admissions and hospital-to-hospital transfers, which are likely dependent on occupancy. MI, HF, and PNA diagnoses were selected because they are prevalent and have high inpatient mortality, allowing us to examine the effect of occupancy on some of the sickest inpatients.17

Hospital-acquired cases of CDI were identified as discharges with a CDI diagnosis (ICD-9 code 008.45) that was not marked as present on admission (POA), using the method described by Zhan et al.18 To avoid outlier effects from small facilities, we included hospitals that had 100 or more MI, HF, and PNA discharges meeting the inclusion criteria over the study period.

OSHPD inpatient data were combined with OSHPD hospital annual financial data, which contain hospital-level variables including ownership (City/County, District, Investor, and Non-Profit), geography (based on health services area), teaching status, urbanicity, and size based on the number of average annual licensed beds. If characteristics were not available for a given hospital for 1 or more years, the information from the closest available year was used for that hospital (replacement was required for 10,504 [1.5%] of cases). In addition, 4,856 otherwise eligible cases (0.7%) were dropped because the hospital was not included in the annual financial data for any year. Approximately 0.2% of records had invalid values for disposition, payer, or admission route and were therefore dropped. Patient residence zip code-level socioeconomic status was measured using the percentage of families living below the poverty line, median family income, and the percentage of individuals with less than a high school degree among those aged ≥25 years19; these measures were divided into 3 groups (bottom quartile, top quartile, and middle 50%) for analysis.
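
As an illustration of this grouping step, the short sketch below splits a zip code-level measure into the bottom quartile, middle 50%, and top quartile. It is a hypothetical pandas example; the data frame and column names are assumptions, not elements of the OSHPD data set.

```python
# Hypothetical sketch of the SES grouping described above; not the authors' code.
import pandas as pd

def ses_group(measure: pd.Series) -> pd.Series:
    """Assign each zip code to the bottom quartile, middle 50%, or top quartile of a measure."""
    q25, q75 = measure.quantile([0.25, 0.75])
    return pd.cut(measure,
                  bins=[-float("inf"), q25, q75, float("inf")],
                  labels=["bottom quartile", "middle 50%", "top quartile"])

# Example usage with an assumed column name:
# zips["poverty_group"] = ses_group(zips["pct_families_below_poverty"])
```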

Measure of Occupancy

Calculating Daily Census and Bed Capacity

We calculated the daily census using the admission date and HLOS for each observation in our dataset. We approximated bed capacity as the maximum daily census in the 121-day window (±60 days) around each census day in each hospital. The 121-day window was chosen to increase the likelihood of capturing changes in bed availability (eg, due to unit closures) and seasonal variability. Our daily census excludes patients admitted with psychiatric and obstetric diagnoses and long-term care/rehabilitation stays (identified through CCS categories) because these patients are not likely to compete for the same hospital resources as those receiving care for MI, HF, and PNA. See Appendix Table 1 for definitions of the occupancy terms.
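
A minimal sketch of this construction is shown below, assuming a pandas data frame of eligible discharges with hypothetical columns hospital_id, admit_date, and hlos_days; the exclusions described above would be applied before this step, and none of the names come from the study data set.

```python
# Sketch of the daily census and windowed bed capacity described above; not the authors' code.
import pandas as pd

def daily_census(discharges: pd.DataFrame) -> pd.DataFrame:
    """Count occupied beds per hospital per calendar day from admission date and HLOS."""
    stays = []
    for _, row in discharges.iterrows():
        days = pd.date_range(row["admit_date"], periods=max(int(row["hlos_days"]), 1), freq="D")
        stays.append(pd.DataFrame({"hospital_id": row["hospital_id"], "date": days}))
    return (pd.concat(stays)
              .groupby(["hospital_id", "date"]).size()
              .rename("census").reset_index())

def add_capacity(census: pd.DataFrame, window_days: int = 121) -> pd.DataFrame:
    """Approximate capacity as the maximum census in a centered 121-day (±60-day) window.

    Assumes one row per hospital-day with no gaps; calendar days with no eligible
    patients would need to be reindexed in (census = 0) before the rolling maximum.
    """
    census = census.sort_values(["hospital_id", "date"]).reset_index(drop=True)
    census["capacity"] = (census.groupby("hospital_id")["census"]
                                .transform(lambda s: s.rolling(window_days, center=True,
                                                               min_periods=1).max()))
    return census
```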

Calculating Relative Daily Occupancy

We developed a raw hospital-specific occupancy measure by dividing the daily census by the maximum census in each 121-day window for each hospital. We then converted these raw measures to percentiles within the 121-day window to create a daily relative occupancy measure. For example, a median-occupancy day would correspond to a relative occupancy of 0.5, and a minimum- or maximum-occupancy day would correspond to 0 or 1, respectively. We preferred a relative occupancy measure because it reflects the assumption that what constitutes “high occupancy” likely depends on the usual occupancy level of the facility.
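
Continuing the hypothetical frame from the previous sketch, the conversion to a relative (percentile-based) measure could look roughly like the following; tie handling is simplified relative to a formal percentile rank.

```python
# Sketch of the relative occupancy measure described above: raw occupancy is the
# census divided by the windowed maximum, and relative occupancy is the day's
# percentile position within its own ±60-day window (window minimum -> 0,
# window maximum -> 1, median day ~0.5). Column names are illustrative.
import numpy as np
import pandas as pd

def add_relative_occupancy(census: pd.DataFrame, half_window: int = 60) -> pd.DataFrame:
    census = census.sort_values(["hospital_id", "date"]).reset_index(drop=True)
    census["raw_occupancy"] = census["census"] / census["capacity"]

    def window_percentile(values: np.ndarray) -> np.ndarray:
        out = np.empty(len(values))
        for i in range(len(values)):
            window = values[max(0, i - half_window): i + half_window + 1]
            # Fraction of the other days in the window with a strictly lower census.
            out[i] = (window < values[i]).sum() / max(len(window) - 1, 1)
        return out

    census["rel_occupancy"] = (census.groupby("hospital_id")["census"]
                                     .transform(lambda s: window_percentile(s.to_numpy())))
    return census
```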

Measuring Admission Day Occupancy and Average Occupancy over Hospitalization

Using the relative daily occupancy values, we constructed patient-level variables representing occupancy on admission day and average occupancy during hospitalization.
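
A sketch of this step, joining each discharge to the hospital-day relative occupancy series built above and taking the admission-day value and the mean over the stay, is shown below; the column names remain illustrative and the loop favors clarity over speed.

```python
# Sketch of the patient-level exposure variables described above; not the authors' code.
import pandas as pd

def patient_occupancy(discharges: pd.DataFrame, census: pd.DataFrame) -> pd.DataFrame:
    census = census.sort_values(["hospital_id", "date"])
    records = []
    for idx, row in discharges.iterrows():
        stay = pd.date_range(row["admit_date"], periods=max(int(row["hlos_days"]), 1), freq="D")
        occ = census.loc[(census["hospital_id"] == row["hospital_id"]) &
                         (census["date"].isin(stay)), "rel_occupancy"]
        records.append({"discharge_id": idx,
                        "admit_occupancy": occ.iloc[0] if len(occ) else None,
                        "avg_occupancy": occ.mean() if len(occ) else None})
    return pd.DataFrame(records)
```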

Data Analysis

First, we estimated descriptive statistics of the sample for occupancy, patient-level (eg, age, race, gender, and severity of illness), hospital-level (eg, size, teaching status, and urbanicity), and incident-level (day of the week and season) variables. Next, we used logistic regression with clustered standard errors to estimate the adjusted and unadjusted associations of occupancy with CDI. For this analysis, occupancy was broken into 4 groups: 0.00-0.25 (low occupancy), 0.26-0.50, 0.51-0.75, and 0.76-1.00 (high occupancy), with the 0.00-0.25 group treated as the reference level. We fit separate models for admission and average occupancy and re-ran the latter model including HLOS as a sensitivity analysis.
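
An illustrative specification of such a model, with standard errors clustered by hospital, is sketched below using statsmodels; the outcome, covariate, and cluster column names are placeholders rather than the authors' exact variables, and complete-case data are assumed so that the cluster vector aligns with the model rows.

```python
# Illustrative sketch, not the authors' code: logistic regression of CDI on
# occupancy group with hospital-clustered standard errors.
import pandas as pd
import statsmodels.formula.api as smf

def fit_cdi_model(df: pd.DataFrame):
    df = df.copy()
    # Four analysis groups of relative occupancy, with 0.00-0.25 as the reference level.
    df["occ_group"] = pd.cut(df["avg_occupancy"],
                             bins=[0, 0.25, 0.50, 0.75, 1.00],
                             labels=["0.00-0.25", "0.26-0.50", "0.51-0.75", "0.76-1.00"],
                             include_lowest=True)
    model = smf.logit("cdi ~ C(occ_group) + age + female + C(season) + C(day_of_week)",
                      data=df)
    # Cluster-robust standard errors at the hospital level.
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": df["hospital_id"]},
                     disp=False)
```

For the unadjusted estimate, the covariate terms would simply be dropped from the formula; the admission-occupancy model would substitute the admission-day measure for avg_occupancy.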

RESULTS

Study Population and Hospitals

Across 327 hospitals, 558,829 discharges (including deaths) met our inclusion criteria, and there were 2,045 admissions with CDI. The hospital and discharge characteristics are reported in Appendix Table 2.

Relationship of Occupancy with CDI

With regard to admission occupancy, the 0.26-0.50 group did not have a significantly higher rate of CDI than the low occupancy group. Both the 0.51-0.75 and the 0.76-1.00 occupancy groups had 15% lower odds of CDI compared to the low occupancy group (Table). The adjusted results were similar, although the comparison between the low and high occupancy groups was marginally nonsignificant.

With regard to average occupancy, intermediate levels of occupancy (ie, the 0.26-0.50 and 0.51-0.75 groups) had over 3-fold higher odds of CDI relative to the low occupancy group; the high occupancy group did not have significantly different odds of CDI compared with the low occupancy group (Table). The adjusted results were similar, with no changes in statistical significance. Including HLOS tempered the adjusted odds ratio of CDI to 1.6 for intermediate levels of occupancy, but the odds remained significantly higher than in the high or low occupancy groups.

DISCUSSION

Hospital occupancy is related to CDI. However, contrary to expectation, we found that higher admission-day occupancy and higher average occupancy over the hospitalization were not related to more hospital-acquired CDI. CDI rates were highest at intermediate levels of average occupancy, with lower CDI rates at high and low occupancy, and CDI had an inverse relationship with admission occupancy.

These findings suggest that an exploration of the processes by which hospitals accommodate higher occupancy might elucidate measures to reduce CDI. How do staffing, implementation of policies, and routine procedures vary when hospitals are busy or quiet? What aspects of care delivery that function well during high and low occupancy periods break down during intermediate occupancy? Hospital policies, practices, and procedures during different phases of occupancy might inform best practices. These data suggest that infection control officers should routinely collect hospital occupancy levels and link them to protocols that are triggered or modified at high or low occupancy and that might affect HACs.

Previous studies in Europe found increasing hospital-acquired infection rates with increasing occupancy.13-15 The authors postulated that increasing occupancy may limit available resources and increase nursing workloads, negatively impacting adherence to hand hygiene and cleaning protocols.8 However, these studies did not account for infections that were POA. In addition, our study examined hospitals in California after the 2006 implementation of the minimum nurse staffing policy, which means that staff-to-patient ratios could not fall below fixed thresholds that were typically higher than pre-policy ratios.19

This study had limitations pertaining to coded administrative data, including the quality of coding and data validity. However, OSHPD has strict data reporting processes.20 This study focused on 1 state; however, California is large, with a demographically diverse population and a range of hospital types, characteristics that support the generalizability of our findings. Furthermore, when using the average occupancy measure, we could not determine whether the complication was acquired during the high occupancy period of the hospitalization.

Higher admission-day occupancy was associated with a lower likelihood of CDI, and CDI rates were lower at high and low average occupancy. These findings should prompt exploration of how hospitals react to occupancy changes and how those care processes translate into HACs, in order to inform best practices for hospital care.

Acknowledgments

The authors would like to thank Ms. Amanda Kogowski, MPH and Mr. Rekar Taymour, MS for their editorial assistance with drafting the manuscript.

Disclosures

The authors have no conflicts to disclose.

Funding 

This study was funded by the National Institute on Aging.

References

1. Siegel B, Wilson MJ, Sickler D. Enhancing work flow to reduce crowding. Jt Comm J Qual Patient Saf. 2007;33(11):57-67.
2. Institute of Medicine Committee on the Future of Emergency Care in the U.S. Health System. The future of emergency care in the United States health system. Ann Emerg Med. 2006;48(2):115-120. DOI: 10.1016/j.annemergmed.2006.06.015.
3. Weissman JS, Rothschild JM, Bendavid E, et al. Hospital workload and adverse events. Med Care. 2007;45(5):448-455. DOI: 10.1097/01.mlr.0000257231.86368.09.
4. Fieldston ES, Hall M, Shah SS, et al. Addressing inpatient crowding by smoothing occupancy at children’s hospitals. J Hosp Med. 2011;6(8):466-473.
5. Evans WN, Kim B. Patient outcomes when hospitals experience a surge in admissions. J Health Econ. 2006;25(2):365-388. DOI: 10.1016/j.jhealeco.2005.10.003.
6. Bair AE, Song WT, Chen Y-C, Morris BA. The impact of inpatient boarding on ED efficiency: a discrete-event simulation study. J Med Syst. 2010;34(5):919-929. DOI: 10.1007/s10916-009-9307-4.
7. Schilling PL, Campbell DA Jr, Englesbe MJ, Davis MM. A comparison of in-hospital mortality risk conferred by high hospital occupancy, differences in nurse staffing levels, weekend admission, and seasonal influenza. Med Care. 2010;48(3):224-232. DOI: 10.1097/MLR.0b013e3181c162c0.
8. Schwierz C, Augurzky B, Focke A, Wasem J. Demand, selection and patient outcomes in German acute care hospitals. Health Econ. 2012;21(3):209-221.
9. Sharma R, Stano M, Gehring R. Short-term fluctuations in hospital demand: implications for admission, discharge, and discriminatory behavior. RAND J Econ. 2008;39(2):586-606.
10. Centers for Medicare and Medicaid Services. Hospital-Acquired Condition Reduction Program (HACRP). 2016. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/HAC-Reduction-Program.html. Accessed October 5, 2017.
11. Cunningham JB, Kernohan G, Rush T. Bed occupancy, turnover intervals and MRSA rates in English hospitals. Br J Nurs. 2006;15(12):656-660. DOI: 10.12968/bjon.2006.15.12.21398.
12. Cunningham JB, Kernohan WG, Rush T. Bed occupancy, turnover interval and MRSA rates in Northern Ireland. Br J Nurs. 2006;15(6):324-328. DOI: 10.12968/bjon.2006.15.6.20680.
13. Kaier K, Luft D, Dettenkofer M, Kist M, Frank U. Correlations between bed occupancy rates and Clostridium difficile infections: a time-series analysis. Epidemiol Infect. 2011;139(3):482-485. DOI: 10.1017/S0950268810001214.
14. Rafferty AM, Clarke SP, Coles J, et al. Outcomes of variation in hospital nurse staffing in English hospitals: cross-sectional analysis of survey data and discharge records. Int J Nurs Stud. 2007;44(2):175-182. DOI: 10.1016/j.ijnurstu.2006.08.003.
15. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. DOI: 10.1056/NEJMsa003376.
16. Zhan C, Elixhauser A, Richards CL Jr, et al. Identification of hospital-acquired catheter-associated urinary tract infections from Medicare claims: sensitivity and positive predictive value. Med Care. 2009;47(3):364-369. DOI: 10.1097/MLR.0b013e31818af83d.
17. United States Census Bureau. American FactFinder; 2016.
18. McHugh MD, Ma C. Hospital nursing and 30-day readmissions among Medicare patients with heart failure, acute myocardial infarction, and pneumonia. Med Care. 2013;51(1):52. DOI: 10.1097/MLR.0b013e3182763284.
19. Coffman JM, Seago JA, Spetz J. Minimum nurse-to-patient ratios in acute care hospitals in California. Health Aff. 2002;21(5):53-64. DOI: 10.1377/hlthaff.21.5.53.
20. State of California. Medical Information Reporting for California (MIRCal) Regulations. 2016.


Journal of Hospital Medicine. 2018;13(10):698-701. Published online first June 27, 2018.

© 2018 Society of Hospital Medicine

Correspondence

Mahshid Abir, MD, MSc, Department of Emergency Medicine, Acute Care Research Unit Institute of Healthcare Policy and Innovation, North Campus Research Complex, 2800 Plymouth Road, Building 14-G226, Ann Arbor, MI 48109; Telephone: 734-763-9707, Fax: 734-232-1218; E-mail: Mahshida@med.umich.edu

Pediatric Hospitalist Workload and Sustainability in University-Based Programs: Results from a National Interview-Based Survey


Pediatric hospital medicine (PHM) has grown tremendously since Wachter first described the specialty in 1996.1 Evidence of this growth is seen most markedly at the annual Pediatric Hospitalist Meeting, which has experienced an increase in attendance from 700 in 2013 to over 1,200 in 2017.2 Although the exact number of pediatric hospitalists in the United States is unknown, the American Academy of Pediatrics Section on Hospital Medicine (AAP SOHM) estimates that approximately 3,000-5,000 pediatric hospitalists currently practice in the country (personal communication).

As PHM programs have grown, variability has been reported in the roles, responsibilities, and workload among practitioners. Gosdin et al.3 reported large ranges and standard deviations in workload among full-time equivalents (FTEs) in academic PHM programs. However, this study’s ability to account for important nuances in program description was limited given that its data were obtained from an online survey.

Program variability, particularly regarding clinical hours and overall clinical burden (eg, in-house hours, census caps, and weekend coverage), is concerning given the well-reported increase in physician burnout.4,5 Benchmarking data regarding the overall workload of pediatric hospitalists can offer nationally recognized guidance to assist program leaders in building successful programs. With this goal in mind, we sought to obtain data on university-based PHM programs to describe the current average workload for a 1.0 clinical FTE pediatric hospitalist and to assess the perceptions of program directors regarding the sustainability of the current workload.

METHODS

Study Design and Population

To obtain data with sufficient detail to compare programs, the authors, all of whom are practicing pediatric hospitalists at university-based programs, conducted structured interviews of PHM leaders in the United States. Given the absence of a single database for all PHM programs in the United States, the clinical division/program leaders of university-based programs were invited to participate through a post (with 2 reminders) to the AAP SOHM Listserv for PHM Division Leaders in May of 2017. To encourage participation, respondents were promised a summary of aggregate data. The study was exempted by the IRB of the University of Chicago.

Interview Content and Administration

The authors designed an 18-question structured interview regarding the current state of staffing in university-based PHM programs, with a focus on current descriptions of FTE, patient volume, and workload. Utilizing prior surveys3 as a basis, the authors iteratively determined the questions essential to understanding the programs’ current staffing models and ideal models. Considering the diversity of program models, interviews allowed for the clarification of questions and answers. A question regarding employment models was included to determine whether hospitalists were university-employed, hospital-employed, or a hybrid of the 2 modes of employment. The interview was also designed to establish a common language for work metrics (hours per year) for comparative purposes and to assess the perceived sustainability of the workload. Questions were provided in advance to provide respondents with sufficient time to collect data, thus increasing the accuracy of estimates. Respondents were asked, “Do you or your hospitalists have concerns about the sustainability of the model?” Sustainability was intentionally undefined to prevent limiting respondent perspective. For clarification, however, a follow-up comment that included examples was provided: “Faculty departures, reduction in total effort, and/or significant burn out.” The authors piloted the interview protocol by interviewing the division leaders of their own programs, and revisions were made based on feedback on feasibility and clarity. Finally, the AAP SOHM Subcommittee on Division Leaders provided feedback, which was incorporated.

Each author then interviewed 10-12 leaders (or their designees) during May and June of 2017. Answers were recorded in REDCap, an online survey and database tool; the data collection form contained largely numeric fields and 1 field for narrative comments.

Data Analysis

Descriptive statistics were used to summarize interview responses, including median values with interquartile ranges. Data were compared between programs with models that were self-identified as either sustainable or unsustainable, with P values for categorical variables from the χ2 or Fisher’s exact test and for continuous variables from the Wilcoxon rank-sum test.

The Spearman correlation coefficient was used to evaluate the association between average protected time (defined as the percentage of funded time for nonclinical roles) and the percentage of hospitalists working full-time clinical effort, as well as the associations of hours per year per 1.0 FTE and of total weekends per year per 1.0 FTE with perceived sustainability. Linear regression was used to determine whether associations differed between groups identifying as sustainable versus unsustainable.
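
A minimal sketch of these comparisons using SciPy is shown below; the data frame and column names (hours_per_fte, weekends_per_fte, university_employed, sustainable) are assumptions for illustration, not the study variables.

```python
# Illustrative sketch of the group comparisons and correlations described above.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame) -> dict:
    sustainable = df[df["sustainable"]]
    unsustainable = df[~df["sustainable"]]
    # Continuous measure (eg, clinical hours per 1.0 FTE): Wilcoxon rank-sum test.
    hours_p = stats.mannwhitneyu(sustainable["hours_per_fte"],
                                 unsustainable["hours_per_fte"],
                                 alternative="two-sided").pvalue
    # Categorical measure (eg, university employment): Fisher's exact test on a 2x2 table.
    _, employment_p = stats.fisher_exact(pd.crosstab(df["sustainable"],
                                                     df["university_employed"]))
    # Weekends per 1.0 FTE vs perceived sustainability: Spearman correlation.
    weekend_rho, weekend_p = stats.spearmanr(df["weekends_per_fte"],
                                             df["sustainable"].astype(int))
    return {"hours_p": hours_p, "employment_p": employment_p,
            "weekend_rho": weekend_rho, "weekend_p": weekend_p}
```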

RESULTS

Participation and Program Characteristics

Of the 143 subscribers to the listserv, which includes community and university-based programs, 62 division leaders/directors who self-identified as leading university-based hospitalist programs initially responded, and 56 completed phone interviews. Of these 56 respondents, 48% were university employed. The remainder were hospital employed (27%), had joint university/hospital appointments (13%), practiced in a private group (5%), or had other employment models (7%).

Administration

A wide variation was reported in the clinical time expected of a 1.0 FTE hospitalist. Clinical time for 1.0 FTE was defined as the amount of clinical service a full-time hospitalist is expected to complete in 12 months (Table 1). The median number of hours worked per year was 1800 (interquartile range [IQR] 1620-1975; mean 1796). The median number of weekends worked per year was 15.0 (IQR 12.5-21; mean 16.8). Only 30% of pediatric hospitalists were full-time clinicians, whereas the rest had protected time for nonclinical duties. The average amount of protected time was 20% per full-time hospitalist.

Sustainability and Ideal FTE

Half of the division leaders reported that they or their hospitalists have concerns about the sustainability of the current workload. Programs perceived as sustainable required significantly fewer weekends per year than those perceived as unsustainable (13 vs. 16, P < .02; Table 2). University-employed programs were more likely to be perceived as unsustainable (64% vs. 32%, P < .048), whereas programs with other employment models were more likely to be perceived as sustainable (Table 2). Total hours currently worked did not differ significantly between programs perceived as sustainable and unsustainable. Respondents reported an ideal workload for a 1.0 FTE of 1700 clinical hours (median). The hours worked per year for programs perceived as sustainable were statistically closer to their ideal than those perceived as unsustainable (P = .46; Table 2).

DISCUSSION

This study updates what has been previously reported about the structure and characteristics of university-based pediatric hospitalist programs.3 It also deepens our understanding of a relatively new field and the evolution of clinical coverage models. This evolution has been impacted by decreased resident work hours, increased patient complexity and acuity,6 and a broadened focus on care coordination and communication,7 while attempting to build and sustain a high-quality workforce.

This study is the first to use an interview-based method to determine the current PHM workload and to focus exclusively on university-based programs. Compared with the study by Gosdin et al,3 our study, which utilized interviews instead of surveys, was able to clarify questions and obtain workload data with a common language of hours per year. This approach allowed interviewees to incorporate subtleties, such as clinical vs. total FTE, in their responses. Our study found a slightly narrower range of clinical hours per year and extended the understanding of nonclinical duties by finding that university-based hospitalists have an average of 20% protected time from clinical duties.

In this study, we also explored the perceived sustainability of current clinical models and the ideal clinical model in hours per year. Half of respondents felt their current model was unsustainable. This result suggested that the field must continue to mitigate attrition and burnout.

Interestingly, the total number of clinical hours did not significantly differ in programs perceived to be unsustainable. Instead, a higher number of weekends worked and university employment were associated with lack of sustainability. We hypothesize that weekends have a disproportionate impact on work-life balance as compared with total hours, and that employment by a university may be a proxy for the increased academic and teaching demands of hospitalists without protected time. Future studies may better elucidate these findings and inform programmatic efforts to address sustainability.

Given that PHM is a relatively young field, considering the evolution of our clinical work model within the context of pediatric emergency medicine (PEM), a field that faces similar challenges in overnight and weekend staffing requirements, may be helpful. Gorelick et al.8 reported that total clinical work hours in PEM (combined academic and nonacademic programs) have decreased from 35.3 hours per week in 1998 to 26.7 in 2013. Extrapolating these numbers to an annual position with 5 weeks of PTO/CME, the average PEM attending physician works approximately 1254 clinical hours per year. These numbers demonstrate a marked difference from the average of 1800 clinical work hours for PHM found in our study.
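
For transparency, the extrapolation above amounts to the following back-of-the-envelope calculation, assuming 52 weeks minus 5 weeks of PTO/CME:

```python
# Back-of-the-envelope check of the PEM extrapolation cited above.
weekly_hours_2013 = 26.7      # reported mean weekly clinical hours in PEM (2013)
working_weeks = 52 - 5        # assuming 5 weeks of PTO/CME per year
annual_clinical_hours = weekly_hours_2013 * working_weeks
print(round(annual_clinical_hours))  # ~1255, consistent with the ~1254 hours cited above
```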

Although total hours trend lower in PEM, the authors noted continued challenges in sustainability with an estimated half of all PEM respondents indicating a plan to reduce hours or leave the field in the next 5 years and endorsing symptoms of burnout.6 These findings from PEM may motivate PHM leaders to be more aggressive in adjusting work models toward sustainability in the future.

Our study has several limitations. We utilized a convenience sampling approach that requires the voluntary participation of division directors. Although we had robust interest from respondents representing all major geographic areas, the respondent pool might conceivably over-represent those most interested in understanding and/or changing PHM clinical models. Overall, our sample size was smaller than that achieved by a survey approach. Nevertheless, this limitation was offset by controlling respondent type and clarifying questions, thus improving the quality of our obtained data.

CONCLUSION

This interview-based study of PHM directors describes the current state of clinical work models for university-based hospitalists. University-based PHM programs have similar mean and median total clinical hours per year. However, these hours are higher than those considered ideal by PHM directors, and many are concerned about the sustainability of current work models. Notably, programs that are university-employed or have higher weekends worked per year are more likely to be perceived as unsustainable. Future studies should explore differences between programs with sustainable work models and those with high levels of attrition and burnout.

Disclosures

The authors have no other conflicts to report.

Funding

A grant from the American Academy of Pediatrics Section on Hospital Medicine funded this study through the Subcommittee on Division and Program Leaders.

References

1. Wachter RM, Goldman L. The emerging role of “hospitalists” in the American health care system. N Engl J Med. 1996;335(7):514-517. DOI: 10.1056/NEJM199608153350713.
2. Chang W. Record Attendance, Key Issues Highlight Pediatric Hospital Medicine’s 10th Anniversary. http://www.the-hospitalist.org/hospitalist/article/125665/pediatrics/record-attendance-key-issues-highlight-pediatric-hospital
3. Gosdin C, Simmons J, Yau C, Sucharew H, Carlson D, Paciorkowski N. Survey of academic pediatric hospitalist programs in the US: organizational, administrative, and financial factors. J Hosp Med. 2013;8(6):285-291. DOI: 10.1002/jhm.2020.
4. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2011;27(1):28-36. DOI: 10.1007/s11606-011-1780-z.
5. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. DOI: 10.1002/jhm.1907.
6. Barrett DJ, McGuinness GA, Cunha CA, et al. Pediatric hospital medicine: a proposed new subspecialty. Pediatrics. 2017;139(3):1-9. DOI: 10.1542/peds.2016-1823.
7. Cawley P, Deitelzweig S, Flores L, et al. The key principles and characteristics of an effective hospital medicine group: an assessment guide for hospitals and hospitalists. J Hosp Med. 2014;9(2):123-128. DOI: 10.1002/jhm.2119.
8. Gorelick MH, Schremmer R, Ruch-Ross H, Radabaugh C, Selbst S. Current workforce characteristics and burnout in pediatric emergency medicine. Acad Emerg Med. 2016;23(1):48-54. DOI: 10.1111/acem.12845.


Appraising the Evidence Supporting Choosing Wisely® Recommendations


As healthcare costs rise, physicians and other stakeholders are now seeking innovative and effective ways to reduce the provision of low-value services.1,2 The Choosing Wisely® campaign aims to further this goal by promoting lists of specific procedures, tests, and treatments that providers should avoid in selected clinical settings.3 On February 21, 2013, the Society of Hospital Medicine (SHM) released 2 Choosing Wisely® lists consisting of adult and pediatric services that are seen as costly to consumers and to the healthcare system, but which are often nonbeneficial or even harmful.4,5 A total of 80 physician and nurse specialty societies have joined in submitting additional lists.

Despite the growing enthusiasm for this effort, questions remain regarding the Choosing Wisely® campaign’s ability to initiate the meaningful de-adoption of low-value services. Specifically, prior efforts to reduce the use of services deemed to be of questionable benefit have met several challenges.2,6 Early analyses of the Choosing Wisely® recommendations reveal similar roadblocks and variable uptake of several recommendations.7-10 While the reasons for difficulties in achieving de-adoption are broad, one important factor in whether clinicians are willing to follow guideline recommendations from such initiatives as Choosing Wisely® is the extent to which they believe in the underlying evidence.11 The current work seeks to formally evaluate the evidence supporting the Choosing Wisely® recommendations and to compare the quality of evidence supporting the SHM lists with that supporting other published Choosing Wisely® lists.

METHODS

Data Sources

Using the online listing of published Choosing Wisely® recommendations, we generated a dataset incorporating all 320 recommendations from the 58 lists published through August 2014; these include both the adult and pediatric hospital medicine lists released by the SHM.4,5,12 Although data collection ended at that point, this sample represents a majority of the 81 lists and 535 recommendations published through December 2017. The reviewers (A.J.A., A.G., M.W., T.S.V., M.S., and C.R.C.) extracted information about the references cited for each recommendation.

Data Analysis

The reviewers obtained each reference cited by a Choosing Wisely® recommendation and categorized it by evidence strength along the following hierarchy: clinical practice guideline (CPG), primary research, review article, expert opinion, book, or other/unknown. CPGs were treated as the highest level of evidence based on standard expectations for methodological rigor.13 Primary research was further subdivided, in descending order of strength, into systematic reviews and meta-analyses, randomized controlled trials (RCTs), observational studies, and case series. Each recommendation was then graded using only the strongest piece of evidence it cited.
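To make this grading rule concrete, the following is a minimal illustrative sketch with hypothetical data and helper names (not the reviewers’ actual tooling): the strongest-ranked citation determines a recommendation’s grade.

```python
# Illustrative only: hypothetical names and data, not the study's code.
# Evidence hierarchy used for grading, from strongest (0) to weakest (8);
# primary research is expanded into its sub-levels as described above.
EVIDENCE_RANK = {
    "clinical practice guideline": 0,
    "systematic review/meta-analysis": 1,
    "randomized controlled trial": 2,
    "observational study": 3,
    "case series": 4,
    "review article": 5,
    "expert opinion": 6,
    "book": 7,
    "other/unknown": 8,
}


def grade_recommendation(cited_evidence_types):
    """Grade a recommendation by the strongest evidence type among its citations."""
    return min(cited_evidence_types, key=lambda kind: EVIDENCE_RANK[kind])


# A recommendation citing an opinion piece, an observational study, and an RCT
# is graded by the RCT, its strongest citation.
print(grade_recommendation([
    "expert opinion",
    "observational study",
    "randomized controlled trial",
]))  # -> randomized controlled trial
```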

Guideline Appraisal

We further sought to evaluate the strength of the referenced CPGs. To accomplish this, a 10% random sample of the Choosing Wisely® recommendations citing CPGs was selected, and the referenced CPGs were obtained. Separately, CPGs referenced by the SHM-published adult and pediatric lists were also obtained. For both groups, one CPG was randomly selected when a recommendation cited more than one CPG. These guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument, a widely used tool for assessing CPG quality.14,15 AGREE II consists of 25 questions categorized into 6 domains: scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence. Guidelines are also assigned an overall score. Two trained reviewers (A.J.A. and A.G.) assessed each of the sampled CPGs using a standardized form. Scores were then standardized using the method recommended by the instrument and reported as a percentage of available points. Although the instrument does not provide a standard interpretation of scores, prior applications have deemed scores below 50% deficient.16,17 We also abstracted data on the year of publication, the evidence grade assigned to the specific items recommended by Choosing Wisely®, and whether the CPG addressed the referring recommendation. All data management and analysis were conducted using Stata (V14.2, StataCorp, College Station, Texas).
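For reference, the standardization recommended by the AGREE II user’s manual expresses each domain score (and the overall assessment) as a percentage of the available points; a sketch, assuming the instrument’s 1-to-7 item scale and the 2 appraisers used here:

\[
\text{scaled score} = \frac{\text{obtained score} - \text{minimum possible score}}{\text{maximum possible score} - \text{minimum possible score}} \times 100\%,
\]

where, for a domain containing n items rated by k appraisers, the minimum possible score is 1 × n × k and the maximum possible score is 7 × n × k.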

RESULTS

A total of 320 recommendations were considered in our analysis, including 10 published across the 2 hospital medicine lists. When limited to the highest quality citation for each of the recommendations, 225 (70.3%) cited CPGs, whereas 71 (22.2%) cited primary research articles (Table 1). Specifically, 29 (9.1%) cited systematic reviews and meta-analyses, 28 (8.8%) cited observational studies, and 13 (4.1%) cited RCTs. One recommendation (0.3%) cited a case series as its highest level of evidence, 7 (2.2%) cited review articles, 7 (2.2%) cited editorials or opinion pieces, and 10 (3.1%) cited other types of documents, such as websites or books. Among hospital medicine recommendations, 9 (90%) referenced CPGs and 1 (10%) cited an observational study.

For the AGREE II assessment, we included 23 CPGs from the 225 referenced across all recommendations, after which we separately selected 6 CPGs from the hospital medicine recommendations; there was no overlap between the two samples. Notably, 4 hospital medicine recommendations referenced a common CPG. Among the random sample of referenced CPGs, the median overall AGREE II score was 54.2% (IQR 33.3%-70.8%, Table 2). This was similar to the median overall score among hospital medicine-referenced guidelines (58.2%, IQR 50.0%-83.3%). Both the hospital medicine-referenced and the other sampled guidelines tended to score poorly in stakeholder involvement (48.6%, IQR 44.1%-61.1% and 47.2%, IQR 38.9%-61.1%, respectively). There were no significant differences between hospital medicine-referenced CPGs and the larger sample of CPGs in any AGREE II subdomain. The median time from CPG publication to list publication was 7 years (IQR 4–7) for hospital medicine recommendations and 3 years (IQR 2–6) for nonhospital medicine recommendations. Substantial agreement was found between raters on the overall guideline assessment (ICC 0.80, 95% CI 0.58-0.91; Supplementary Table 1).



In terms of recommendation strengths and evidence grades, several recommendations were backed by Grade II–III evidence (on a scale of I–III) and level C recommendations (on a scale of A–C) in the reviewed CPG (Society for Maternal-Fetal Medicine, Recommendation 4, and Heart Rhythm Society, Recommendation 1). In one other case, the cited CPG did not directly address the Choosing Wisely® item (Society for Vascular Medicine, Recommendation 2).

DISCUSSION

Given rising costs and the potential for iatrogenic harm, curbing ineffective practices has become an urgent concern. To this end, the Choosing Wisely® campaign has taken an important step by targeting specific low-value practices for de-adoption. However, the evidence supporting individual recommendations is variable. Specifically, 25 recommendations cited case series, review articles, or lower-quality evidence as their highest level of support; moreover, among recommendations citing CPGs, the quality and timeliness of the cited guidelines, and the degree to which they supported the recommendation, were variable. Although the hospital medicine lists tended to cite higher-quality evidence in the form of CPGs, these CPGs were often less recent than the guidelines referenced by other lists.

Our findings parallel those of other works evaluating the evidence behind Choosing Wisely® recommendations and, more broadly, behind CPGs.18–21 Lin and Yancey evaluated the quality of primary care-focused Choosing Wisely® recommendations using the Strength of Recommendation Taxonomy, a ranking system that evaluates evidence quality, consistency, and patient-centeredness.18 In their analysis, the authors found that many recommendations were based on lower-quality evidence or relied on non-patient-centered intermediate outcomes. Several groups, meanwhile, have evaluated the quality of evidence supporting CPG recommendations and found it to be highly variable as well.19–21 These findings likely reflect inherent difficulties in the process by which guideline development groups distill a broad evidence base into useful clinical recommendations, a reality that may also have influenced the Choosing Wisely® list development groups seeking to make similar recommendations on low-value services.

These data should be taken in context due to several limitations. First, our sample of referenced CPGs includes only a small fraction of all cited CPGs; thus, it may not be representative of all referenced guidelines. Second, the AGREE II assessment is inherently subjective, despite the availability of training materials. Third, data collection ended in April 2014. Although this represents a majority of published lists to date, it is possible that more recent Choosing Wisely® lists include a stronger focus on evidence quality. Finally, the references cited by Choosing Wisely® may not be representative of the entirety of the evidence that was considered when formulating the recommendations.

Despite these limitations, our findings suggest that Choosing Wisely® recommendations vary in terms of evidence strength. Although the majority of recommendations cite guidelines or high-quality original research, evidence gaps remain, with a small number citing low-quality evidence or low-quality CPGs as their highest form of support. Given the barriers to the successful de-implementation of low-value services, campaigns such as Choosing Wisely® face an uphill battle in their attempt to prompt behavior changes among providers and consumers.6-9 As a result, it is incumbent on funding agencies and medical journals to promote studies evaluating the harms and overall value of the care we deliver.

CONCLUSIONS

Although a majority of Choosing Wisely® recommendations cite high-quality evidence, some reference low-quality evidence or low-quality CPGs as their highest form of support. To overcome clinical inertia and other barriers to the successful de-implementation of low-value services, a clear rationale for eradicating entrenched practices is critical.2,22 Choosing Wisely® has provided visionary leadership and a powerful platform for questioning low-value care. To expand the campaign’s reach, the medical field must generate the high-quality evidence necessary to support these efforts, and list development groups must consider the availability of strong evidence when targeting services for de-implementation.

ACKNOWLEDGMENT

This work was supported, in part, by a grant from the Agency for Healthcare Research and Quality (No. K08HS020672, Dr. Cooke).

Disclosures

The authors have nothing to disclose.

References

1. Institute of Medicine Roundtable on Evidence-Based Medicine. The Healthcare Imperative: Lowering Costs and Improving Outcomes: Workshop Series Summary. Yong P, Saunders R, Olsen L, editors. Washington, D.C.: National Academies Press; 2010.
2. Weinberger SE. Providing high-value, cost-conscious care: a critical seventh general competency for physicians. Ann Intern Med. 2011;155(6):386-388.
3. Cassel CK, Guest JA. Choosing wisely: Helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802.
4. Bulger J, Nickel W, Messler J, Goldstein J, O’Callaghan J, Auron M, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492.
5. Quinonez RA, Garber MD, Schroeder AR, Alverson BK, Nickel W, Goldstein J, et al. Choosing wisely in pediatric hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):479-485.
6. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
7. Rosenberg A, Agiro A, Gottlieb M, Barron J, Brady P, Liu Y, et al. Early trends among seven recommendations from the Choosing Wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920.
8. Zikmund-Fisher BJ, Kullgren JT, Fagerlin A, Klamerus ML, Bernstein SJ, Kerr EA. Perceived barriers to implementing individual Choosing Wisely® recommendations in two national surveys of primary care providers. J Gen Intern Med. 2017;32(2):210-217.
9. Bishop TF, Cea M, Miranda Y, Kim R, Lash-Dardia M, Lee JI, et al. Academic physicians’ views on low-value services and the Choosing Wisely campaign: A qualitative study. Healthc (Amsterdam, Netherlands). 2017;5(1-2):17-22.
10. Prochaska MT, Hohmann SF, Modes M, Arora VM. Trends in Troponin-only testing for AMI in academic teaching hospitals and the impact of Choosing Wisely®. J Hosp Med. 2017;12(12):957-962.
11. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458-1465.
12. ABIM Foundation. ChoosingWisely.org Search Recommendations. 2014.
13. Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Clinical Practice Guidelines We Can Trust. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E, editors. Washington, D.C.: National Academies Press; 2011.
14. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: Advancing guideline development, reporting, and evaluation in health care. Prev Med (Baltim). 2010;51(5):421-424.
15. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. Development of the AGREE II, part 2: Assessment of validity of items and tools to support application. CMAJ. 2010;182(10):E472-E478.
16. He Z, Tian H, Song A, Jin L, Zhou X, Liu X, et al. Quality appraisal of clinical practice guidelines on pancreatic cancer. Medicine (Baltimore). 2015;94(12):e635.
17. Isaac A, Saginur M, Hartling L, Robinson JL. Quality of reporting and evidence in American Academy of Pediatrics guidelines. Pediatrics. 2013;131(4):732-738.
18. Lin KW, Yancey JR. Evaluating the Evidence for Choosing Wisely™ in Primary Care Using the Strength of Recommendation Taxonomy (SORT). J Am Board Fam Med. 2016;29(4):512-515.
19. McAlister FA, van Diepen S, Padwal RS, Johnson JA, Majumdar SR. How evidence-based are the recommendations in evidence-based guidelines? PLoS Med. 2007;4(8):e250.
20. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831-841.
21. Feuerstein JD, Gifford AE, Akbari M, Goldman J, Leffler DA, Sheth SG, et al. Systematic analysis underlying the quality of the scientific evidence and conflicts of interest in gastroenterology practice guidelines. Am J Gastroenterol. 2013;108(11):1686-1693.
22. Robert G, Harlock J, Williams I. Disentangling rhetoric and reality: an international Delphi study of factors and processes that facilitate the successful implementation of decisions to decommission healthcare services. Implement Sci. 2014;9:123.

Article PDF
Issue
Journal of Hospital Medicine 13(10)
Publications
Topics
Page Number
688-691. Published online first April 25, 2018
Sections
Files
Files
Article PDF
Article PDF
Related Articles

As healthcare costs rise, physicians and other stakeholders are now seeking innovative and effective ways to reduce the provision of low-value services.1,2 The Choosing Wisely® campaign aims to further this goal by promoting lists of specific procedures, tests, and treatments that providers should avoid in selected clinical settings.3 On February 21, 2013, the Society of Hospital Medicine (SHM) released 2 Choosing Wisely® lists consisting of adult and pediatric services that are seen as costly to consumers and to the healthcare system, but which are often nonbeneficial or even harmful.4,5 A total of 80 physician and nurse specialty societies have joined in submitting additional lists.

Despite the growing enthusiasm for this effort, questions remain regarding the Choosing Wisely® campaign’s ability to initiate the meaningful de-adoption of low-value services. Specifically, prior efforts to reduce the use of services deemed to be of questionable benefit have met several challenges.2,6 Early analyses of the Choosing Wisely® recommendations reveal similar roadblocks and variable uptakes of several recommendations.7-10 While the reasons for difficulties in achieving de-adoption are broad, one important factor in whether clinicians are willing to follow guideline recommendations from such initiatives as Choosing Wisely®is the extent to which they believe in the underlying evidence.11 The current work seeks to formally evaluate the evidence supporting the Choosing Wisely® recommendations, and to compare the quality of evidence supporting SHM lists to other published Choosing Wisely® lists.

METHODS

Data Sources

Using the online listing of published Choosing Wisely® recommendations, a dataset was generated incorporating all 320 recommendations comprising the 58 lists published through August, 2014; these include both the adult and pediatric hospital medicine lists released by the SHM.4,5,12 Although data collection ended at this point, this represents a majority of all 81 lists and 535 recommendations published through December, 2017. The reviewers (A.J.A., A.G., M.W., T.S.V., M.S., and C.R.C) extracted information about the references cited for each recommendation.

Data Analysis

The reviewers obtained each reference cited by a Choosing Wisely® recommendation and categorized it by evidence strength along the following hierarchy: clinical practice guideline (CPG), primary research, review article, expert opinion, book, or others/unknown. CPGs were used as the highest level of evidence based on standard expectations for methodological rigor.13 Primary research was further rated as follows: systematic reviews and meta-analyses, randomized controlled trials (RCTs), observational studies, and case series. Each recommendation was graded using only the strongest piece of evidence cited.

Guideline Appraisal

We further sought to evaluate the strength of referenced CPGs. To accomplish this, a 10% random sample of the Choosing Wisely® recommendations citing CPGs was selected, and the referenced CPGs were obtained. Separately, CPGs referenced by the SHM-published adult and pediatric lists were also obtained. For both groups, one CPG was randomly selected when a recommendation cited more than one CPG. These guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument, a widely used instrument designed to assess CPG quality.14,15 AGREE II consists of 25 questions categorized into 6 domains: scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence. Guidelines are also assigned an overall score. Two trained reviewers (A.J.A. and A.G.) assessed each of the sampled CPGs using a standardized form. Scores were then standardized using the method recommended by the instrument and reported as a percentage of available points. Although a standard interpretation of scores is not provided by the instrument, prior applications deemed scores below 50% as deficient16,17. When a recommendation item cited multiple CPGs, one was randomly selected. We also abstracted data on the year of publication, the evidence grade assigned to specific items recommended by Choosing Wisely®, and whether the CPG addressed the referring recommendation. All data management and analysis were conducted using Stata (V14.2, StataCorp, College Station, Texas).

 

 

RESULTS

A total of 320 recommendations were considered in our analysis, including 10 published across the 2 hospital medicine lists. When limited to the highest quality citation for each of the recommendations, 225 (70.3%) cited CPGs, whereas 71 (22.2%) cited primary research articles (Table 1). Specifically, 29 (9.1%) cited systematic reviews and meta-analyses, 28 (8.8%) cited observational studies, and 13 (4.1%) cited RCTs. One recommendation (0.3%) cited a case series as its highest level of evidence, 7 (2.2%) cited review articles, 7 (2.2%) cited editorials or opinion pieces, and 10 (3.1%) cited other types of documents, such as websites or books. Among hospital medicine recommendations, 9 (90%) referenced CPGs and 1 (10%) cited an observational study.

For the AGREE II assessment, we included 23 CPGs from the 225 referenced across all recommendations, after which we separately selected 6 CPGs from the hospital medicine recommendations. There was no overlap. Notably, 4 hospital medicine recommendations referenced a common CPG. Among the random sample of referenced CPGs, the median overall score obtained by using AGREE II was 54.2% (IQR 33.3%-70.8%, Table 2). This was similar to the median overall among hospital medicine guidelines (58.2%, IQR 50.0%-83.3%). Both hospital medicine and other sampled guidelines tended to score poorly in stakeholder involvement (48.6%, IQR 44.1%-61.1% and 47.2%, IQR 38.9%-61.1%, respectively). There were no significant differences between hospital medicine-referenced CPGs and the larger sample of CPGs in any AGREE II subdomains. The median age from the CPG publication to the list publication was 7 years (IQR 4–7) for hospital medicine recommendations and 3 years (IQR 2–6) for the nonhospital medicine recommendations. Substantial agreement was found between raters on the overall guideline assessment (ICC 0.80, 95% CI 0.58-0.91; Supplementary Table 1).



In terms of recommendation strengths and evidence grades, several recommendations were backed by Grades II–III (on a scale of I-III) evidence and level C (on a scale of A–C) recommendations in the reviewed CPG (Society of Maternal-Fetal Medicine, Recommendation 4, and Heart Rhythm Society, Recommendation 1). In one other case, the cited CPG did not directly address the Choosing Wisely® item (Society of Vascular Medicine, Recommendation 2).

DISCUSSION

Given the rising costs and the potential for iatrogenic harm, curbing ineffective practices has become an urgent concern. To achieve this, the Choosing Wisely® campaign has taken an important step by targeting certain low-value practices for de-adoption. However, the evidence supporting recommendations is variable. Specifically, 25 recommendations cited case series, review articles, or lower quality evidence as their highest level of support; moreover, among recommendations citing CPGs, quality, timeliness, and support for the recommendation item were variable. Although the hospital medicine lists tended to cite higher-quality evidence in the form of CPGs, these CPGs were often less recent than the guidelines referenced by other lists.

Our findings parallel those of other works that evaluate evidence among Choosing Wisely® recommendations and, more broadly, among CPGs.18–21 Lin and Yancey evaluated the quality of primary care-focused Choosing Wisely® recommendations using the Strength of Recommendation Taxonomy, a ranking system that evaluates evidence quality, consistency, and patient-centeredness.18 In their analysis, the authors found that many recommendations were based on lower quality evidence or relied on nonpatent-centered intermediate outcomes. Several groups, meanwhile, have evaluated the quality of evidence supporting CPG recommendations, finding them to be highly variable as well.19–21 These findings likely reflect inherent difficulties in the process, by which guideline development groups distill a broad evidence base into useful clinical recommendations, a reality that may have influenced the Choosing Wisely® list development groups seeking to make similar recommendations on low-value services.

These data should be taken in context due to several limitations. First, our sample of referenced CPGs includes only a small sample of all CPGs cited; thus, it may not be representative of all referenced guidelines. Second, the AGREE II assessment is inherently subjective, despite the availability of training materials. Third, data collection ended in April, 2014. Although this represents a majority of published lists to date, it is possible that more recent Choosing Wisely®lists include a stronger focus on evidence quality. Finally, references cited by Choosing Wisely®may not be representative of the entirety of the dataset that was considered when formulating the recommendations.

Despite these limitations, our findings suggest that Choosing Wisely®recommendations vary in terms of evidence strength. Although our results reveal that the majority of recommendations cite guidelines or high-quality original research, evidence gaps remain, with a small number citing low-quality evidence or low-quality CPGs as their highest form of support. Given the barriers to the successful de-implementation of low-value services, such campaigns as Choosing Wisely®face an uphill battle in their attempt to prompt behavior changes among providers and consumers.6-9 As a result, it is incumbent on funding agencies and medical journals to promote studies evaluating the harms and overall value of the care we deliver.

 

 

CONCLUSIONS

Although a majority of Choosing Wisely® recommendations cite high-quality evidence, some reference low-quality evidence or low-quality CPGs as their highest form of support. To overcome clinical inertia and other barriers to the successful de-implementation of low-value services, a clear rationale for the impetus to eradicate entrenched practices is critical.2,22 Choosing Wisely® has provided visionary leadership and a powerful platform to question low-value care. To expand the campaign’s efforts, the medical field must be able to generate the high-quality evidence necessary to support these efforts; further, list development groups must consider the availability of strong evidence when targeting services for de-implementation.

ACKNOWLEDGMENT

This work was supported, in part, by a grant from the Agency for Healthcare Research and Quality (No. K08HS020672, Dr. Cooke).

Disclosures

The authors have nothing to disclose.

As healthcare costs rise, physicians and other stakeholders are now seeking innovative and effective ways to reduce the provision of low-value services.1,2 The Choosing Wisely® campaign aims to further this goal by promoting lists of specific procedures, tests, and treatments that providers should avoid in selected clinical settings.3 On February 21, 2013, the Society of Hospital Medicine (SHM) released 2 Choosing Wisely® lists consisting of adult and pediatric services that are seen as costly to consumers and to the healthcare system, but which are often nonbeneficial or even harmful.4,5 A total of 80 physician and nurse specialty societies have joined in submitting additional lists.

Despite the growing enthusiasm for this effort, questions remain regarding the Choosing Wisely® campaign’s ability to initiate the meaningful de-adoption of low-value services. Specifically, prior efforts to reduce the use of services deemed to be of questionable benefit have met several challenges.2,6 Early analyses of the Choosing Wisely® recommendations reveal similar roadblocks and variable uptakes of several recommendations.7-10 While the reasons for difficulties in achieving de-adoption are broad, one important factor in whether clinicians are willing to follow guideline recommendations from such initiatives as Choosing Wisely®is the extent to which they believe in the underlying evidence.11 The current work seeks to formally evaluate the evidence supporting the Choosing Wisely® recommendations, and to compare the quality of evidence supporting SHM lists to other published Choosing Wisely® lists.

METHODS

Data Sources

Using the online listing of published Choosing Wisely® recommendations, a dataset was generated incorporating all 320 recommendations comprising the 58 lists published through August 2014; these include both the adult and pediatric hospital medicine lists released by the SHM.4,5,12 Although data collection ended at this point, these lists represent a majority of all 81 lists and 535 recommendations published through December 2017. The reviewers (A.J.A., A.G., M.W., T.S.V., M.S., and C.R.C.) extracted information about the references cited for each recommendation.

Data Analysis

The reviewers obtained each reference cited by a Choosing Wisely® recommendation and categorized it by evidence strength along the following hierarchy: clinical practice guideline (CPG), primary research, review article, expert opinion, book, or others/unknown. CPGs were used as the highest level of evidence based on standard expectations for methodological rigor.13 Primary research was further rated as follows: systematic reviews and meta-analyses, randomized controlled trials (RCTs), observational studies, and case series. Each recommendation was graded using only the strongest piece of evidence cited.
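
To make the grading rule concrete, the sketch below ranks citation categories along the hierarchy described above and labels each recommendation by its single strongest citation. It is an illustrative Python fragment with hypothetical category names, not the authors’ actual workflow (the analysis was conducted in Stata).

# Illustrative only: rank each cited reference and grade the recommendation
# by its strongest citation, mirroring the hierarchy described in the text.
EVIDENCE_RANK = {
    "clinical_practice_guideline": 7,
    "systematic_review_or_meta_analysis": 6,
    "randomized_controlled_trial": 5,
    "observational_study": 4,
    "case_series": 3,
    "review_article": 2,
    "expert_opinion": 1,
    "other_or_unknown": 0,
}

def grade_recommendation(cited_categories):
    """Return the highest-ranked evidence category among a recommendation's citations."""
    return max(cited_categories, key=EVIDENCE_RANK.get)

# Example: a recommendation citing an RCT, an observational study, and an editorial
print(grade_recommendation(
    ["randomized_controlled_trial", "observational_study", "expert_opinion"]
))  # -> randomized_controlled_trial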

Guideline Appraisal

We further sought to evaluate the strength of the referenced CPGs. To accomplish this, a 10% random sample of the Choosing Wisely® recommendations citing CPGs was selected, and the referenced CPGs were obtained. Separately, CPGs referenced by the SHM-published adult and pediatric lists were also obtained. For both groups, one CPG was randomly selected when a recommendation cited more than one CPG. These guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument, a widely used tool designed to assess CPG quality.14,15 AGREE II consists of 25 questions categorized into 6 domains: scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence. Guidelines are also assigned an overall score. Two trained reviewers (A.J.A. and A.G.) assessed each of the sampled CPGs using a standardized form. Scores were then standardized using the method recommended by the instrument and reported as a percentage of available points. Although the instrument does not provide a standard interpretation of scores, prior applications have deemed scores below 50% deficient.16,17 We also abstracted data on the year of publication, the evidence grade assigned to the specific items recommended by Choosing Wisely®, and whether the CPG addressed the referring recommendation. All data management and analysis were conducted using Stata (version 14.2, StataCorp, College Station, Texas).
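
For readers unfamiliar with the AGREE II scaling method, the fragment below shows how a domain score is expressed as a percentage of its available range (obtained score minus minimum possible score, divided by maximum minus minimum). The item counts and ratings are hypothetical and are not data from this study.

def agree2_scaled_score(ratings_by_appraiser):
    """Scale an AGREE II domain score as a percentage of its possible range.
    ratings_by_appraiser: one list of 1-7 item ratings per appraiser."""
    n_appraisers = len(ratings_by_appraiser)
    n_items = len(ratings_by_appraiser[0])
    obtained = sum(sum(r) for r in ratings_by_appraiser)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100 * (obtained - minimum) / (maximum - minimum)

# Two appraisers rating a hypothetical 3-item domain
print(round(agree2_scaled_score([[6, 5, 7], [5, 5, 6]]), 1))  # 77.8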

 

 

RESULTS

A total of 320 recommendations were considered in our analysis, including 10 published across the 2 hospital medicine lists. When each recommendation was classified by its highest-quality citation, 225 (70.3%) cited CPGs, whereas 71 (22.2%) cited primary research articles (Table 1). Specifically, 29 (9.1%) cited systematic reviews and meta-analyses, 28 (8.8%) cited observational studies, and 13 (4.1%) cited RCTs. One recommendation (0.3%) cited a case series as its highest level of evidence, 7 (2.2%) cited review articles, 7 (2.2%) cited editorials or opinion pieces, and 10 (3.1%) cited other types of documents, such as websites or books. Among the hospital medicine recommendations, 9 (90%) referenced CPGs and 1 (10%) cited an observational study.

For the AGREE II assessment, we included 23 CPGs from the 225 referenced across all recommendations, after which we separately selected 6 CPGs from the hospital medicine recommendations. There was no overlap. Notably, 4 hospital medicine recommendations referenced a common CPG. Among the random sample of referenced CPGs, the median overall AGREE II score was 54.2% (IQR 33.3%-70.8%, Table 2). This was similar to the median overall score among hospital medicine guidelines (58.2%, IQR 50.0%-83.3%). Both hospital medicine and other sampled guidelines tended to score poorly in stakeholder involvement (48.6%, IQR 44.1%-61.1% and 47.2%, IQR 38.9%-61.1%, respectively). There were no significant differences between hospital medicine-referenced CPGs and the larger sample of CPGs in any AGREE II subdomain. The median time from CPG publication to list publication was 7 years (IQR 4–7) for hospital medicine recommendations and 3 years (IQR 2–6) for the nonhospital medicine recommendations. Substantial agreement was found between raters on the overall guideline assessment (ICC 0.80, 95% CI 0.58-0.91; Supplementary Table 1).



In terms of recommendation strengths and evidence grades, several recommendations were backed by Grade II–III evidence (on a scale of I–III) and Level C recommendations (on a scale of A–C) in the reviewed CPGs (Society of Maternal-Fetal Medicine, Recommendation 4, and Heart Rhythm Society, Recommendation 1). In one other case, the cited CPG did not directly address the Choosing Wisely® item (Society of Vascular Medicine, Recommendation 2).

DISCUSSION

Given rising healthcare costs and the potential for iatrogenic harm, curbing ineffective practices has become an urgent concern. To this end, the Choosing Wisely® campaign has taken an important step by targeting specific low-value practices for de-adoption. However, the evidence supporting these recommendations is variable. Specifically, 25 recommendations cited case series, review articles, or lower quality evidence as their highest level of support; moreover, among recommendations citing CPGs, quality, timeliness, and support for the recommendation item were variable. Although the hospital medicine lists tended to cite higher-quality evidence in the form of CPGs, these CPGs were often less recent than the guidelines referenced by other lists.

Our findings parallel those of other works that evaluate evidence among Choosing Wisely® recommendations and, more broadly, among CPGs.18-21 Lin and Yancey evaluated the quality of primary care-focused Choosing Wisely® recommendations using the Strength of Recommendation Taxonomy, a ranking system that evaluates evidence quality, consistency, and patient-centeredness.18 In their analysis, the authors found that many recommendations were based on lower quality evidence or relied on nonpatient-centered intermediate outcomes. Several groups, meanwhile, have evaluated the quality of evidence supporting CPG recommendations, finding it to be highly variable as well.19-21 These findings likely reflect inherent difficulties in the process by which guideline development groups distill a broad evidence base into useful clinical recommendations, a reality that may have similarly challenged the Choosing Wisely® list development groups seeking to make recommendations on low-value services.

These data should be interpreted in the context of several limitations. First, our sample of referenced CPGs includes only a small fraction of all CPGs cited; thus, it may not be representative of all referenced guidelines. Second, the AGREE II assessment is inherently subjective, despite the availability of training materials. Third, data collection ended in August 2014. Although the lists collected represent a majority of those published to date, it is possible that more recent Choosing Wisely® lists include a stronger focus on evidence quality. Finally, the references cited by Choosing Wisely® may not be representative of the entire body of evidence that was considered when formulating the recommendations.

Despite these limitations, our findings suggest that Choosing Wisely® recommendations vary in terms of evidence strength. Although our results reveal that the majority of recommendations cite guidelines or high-quality original research, evidence gaps remain, with a small number citing low-quality evidence or low-quality CPGs as their highest form of support. Given the barriers to the successful de-implementation of low-value services, campaigns such as Choosing Wisely® face an uphill battle in their attempt to prompt behavior changes among providers and consumers.6-9 As a result, it is incumbent on funding agencies and medical journals to promote studies evaluating the harms and overall value of the care we deliver.

 

 

CONCLUSIONS

Although a majority of Choosing Wisely® recommendations cite high-quality evidence, some reference low-quality evidence or low-quality CPGs as their highest form of support. To overcome clinical inertia and other barriers to the successful de-implementation of low-value services, a clear rationale for eradicating entrenched practices is critical.2,22 Choosing Wisely® has provided visionary leadership and a powerful platform to question low-value care. To expand the campaign’s efforts, the medical field must be able to generate the high-quality evidence necessary to support these efforts; further, list development groups must consider the availability of strong evidence when targeting services for de-implementation.

ACKNOWLEDGMENT

This work was supported, in part, by a grant from the Agency for Healthcare Research and Quality (No. K08HS020672, Dr. Cooke).

Disclosures

The authors have nothing to disclose.

References

1. Institute of Medicine Roundtable on Evidence-Based Medicine. The Healthcare Imperative: Lowering Costs and Improving Outcomes: Workshop Series Summary. Yong P, Saunders R, Olsen L, editors. Washington, D.C.: National Academies Press; 2010. PubMed
2. Weinberger SE. Providing high-value, cost-conscious care: a critical seventh general competency for physicians. Ann Intern Med. 2011;155(6):386-388. PubMed
3. Cassel CK, Guest JA. Choosing wisely: Helping physicians and patients make smart decisions about their care. JAMA. 2012;307(17):1801-1802. PubMed
4. Bulger J, Nickel W, Messler J, Goldstein J, O’Callaghan J, Auron M, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492. PubMed
5. Quinonez RA, Garber MD, Schroeder AR, Alverson BK, Nickel W, Goldstein J, et al. Choosing wisely in pediatric hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):479-485. PubMed
6. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. PubMed
7. Rosenberg A, Agiro A, Gottlieb M, Barron J, Brady P, Liu Y, et al. Early trends among seven recommendations from the Choosing Wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920. PubMed
8. Zikmund-Fisher BJ, Kullgren JT, Fagerlin A, Klamerus ML, Bernstein SJ, Kerr EA. Perceived barriers to implementing individual Choosing Wisely® recommendations in two national surveys of primary care providers. J Gen Intern Med. 2017;32(2):210-217. PubMed
9. Bishop TF, Cea M, Miranda Y, Kim R, Lash-Dardia M, Lee JI, et al. Academic physicians’ views on low-value services and the choosing wisely campaign: A qualitative study. Healthc (Amsterdam, Netherlands). 2017;5(1-2):17-22. PubMed
10. Prochaska MT, Hohmann SF, Modes M, Arora VM. Trends in Troponin-only testing for AMI in academic teaching hospitals and the impact of Choosing Wisely®. J Hosp Med. 2017;12(12):957-962. PubMed
11. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458-1465. PubMed
12. ABIM Foundation. ChoosingWisely.org Search Recommendations. 2014. 
13. Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Clinical Practice Guidelines We Can Trust. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E, editors. Washington, D.C.: National Academies Press; 2011. PubMed
14. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: Advancing guideline development, reporting, and evaluation in health care. Prev Med (Baltim). 2010;51(5):421-424. PubMed
15. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. Development of the AGREE II, part 2: Assessment of validity of items and tools to support application. CMAJ. 2010;182(10):E472-E478. PubMed
16. He Z, Tian H, Song A, Jin L, Zhou X, Liu X, et al. Quality appraisal of clinical practice guidelines on pancreatic cancer. Medicine (Baltimore). 2015;94(12):e635. PubMed
17. Isaac A, Saginur M, Hartling L, Robinson JL. Quality of reporting and evidence in American Academy of Pediatrics guidelines. Pediatrics. 2013;131(4):732-738. PubMed
18. Lin KW, Yancey JR. Evaluating the evidence for Choosing Wisely™ in primary care using the Strength of Recommendation Taxonomy (SORT). J Am Board Fam Med. 2016;29(4):512-515. PubMed
19. McAlister FA, van Diepen S, Padwal RS, Johnson JA, Majumdar SR. How evidence-based are the recommendations in evidence-based guidelines? PLoS Med. 2007;4(8):e250. PubMed
20. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831-841. PubMed
21. Feuerstein JD, Gifford AE, Akbari M, Goldman J, Leffler DA, Sheth SG, et al. Systematic analysis underlying the quality of the scientific evidence and conflicts of interest in gastroenterology practice guidelines. Am J Gastroenterol. 2013;108(11):1686-1693. PubMed
22. Robert G, Harlock J, Williams I. Disentangling rhetoric and reality: an international Delphi study of factors and processes that facilitate the successful implementation of decisions to decommission healthcare services. Implement Sci. 2014;9:123. PubMed



Patient Perceptions of Readmission Risk: An Exploratory Survey

Article Type
Changed
Mon, 10/29/2018 - 21:36

Recent years have seen a proliferation of programs designed to prevent readmissions, including patient education initiatives, financial assistance programs, postdischarge services, and clinical personnel assigned to help patients navigate their posthospitalization clinical care. Although some strategies do not require direct patient participation (such as timely and effective handoffs between inpatient and outpatient care teams), many rely upon a commitment by the patient to participate in the postdischarge care plan. At our hospital, we have found that only about 2/3 of patients who are offered transitional interventions (such as postdischarge phone calls by nurses or home nursing through a “transition guide” program) receive the intended interventions, and those who do not receive them are more likely to be readmitted.1 While limited patient uptake may relate, in part, to factors that are difficult to overcome, such as inadequate housing or phone service, we have also encountered patients whose values, beliefs, or preferences about their care do not align with those of the care team. The purposes of this exploratory study were to (1) assess patient attitudes surrounding readmission, (2) ascertain whether these attitudes are associated with actual readmission, and (3) determine whether patients can estimate their own risk of readmission.

METHODS

From January 2014 to September 2016, we circulated surveys to patients on internal medicine nursing units who were being discharged home within 24 hours. Blank surveys were distributed to nursing units by the researchers. Unit clerks and support staff were educated on the purpose of the project and asked to distribute surveys to patients who were identified by unit case managers or nurses as slated for discharge. Staff members were not asked to help with or supervise survey completion. Surveys were generally filled out by patients, but we allowed family members to assist patients if needed, and to indicate so with a checkbox. There were no exclusion criteria. Because surveys were distributed by clinical staff, the received surveys can be considered a convenience sample. Patients were asked 5 questions with 4- or 5-point Likert scale responses:

(1) “How likely is it that you will be admitted to the hospital (have to stay in the hospital overnight) again within the next 30 days after you leave the hospital this time?” [answers ranging from “Very Unlikely (<5% chance)” to “Very Likely (>50% chance)”];

(2) “How would you feel about being rehospitalized in the next month?” [answers ranging from “Very sad, frustrated, or disappointed” to “Very happy or relieved”];

(3) “How much do you think that you personally can control whether or not you will be rehospitalized (based on what you do to take care of your body, take your medicines, and follow-up with your healthcare team)?” [answers ranging from “I have no control over whether I will be rehospitalized” to “I have complete control over whether I will be rehospitalized”];

(4) “Which of the options below best describes how you plan to follow the medical instructions after you leave the hospital?” [answers ranging from “I do NOT plan to do very much of what I am being asked to do by the doctors, nurses, therapists, and other members of the care team” to “I plan to do EVERYTHING I am being asked to do by the doctors, nurses, therapists and other members of the care team”]; and

(5) “Pick the item below that best describes YOUR OWN VIEW of the care team’s recommendations:” [answers ranging from “I DO NOT AGREE AT ALL that the best way to be healthy is to do exactly what I am being asked to do by the doctors, nurses, therapists, and other members of the care team” to “I FULLY AGREE that the best way to be healthy is to do exactly what I am being asked to do by the doctors, nurses, therapists, and other members of the care team”].

Responses were linked, based on discharge date and medical record number, to administrative data, including age, sex, race, payer, and clinical data. Subsequent hospitalizations to our hospital were ascertained from administrative data. We estimated expected risk of readmission using the all payer refined diagnosis related group coupled with the associated severity-of-illness (SOI) score, as we have reported previously.2-5 We restricted our analysis to patients who answered the question related to the likelihood of readmission. Logistic regression models were constructed using actual 30-day readmission as the dependent variable to determine whether patients could predict their own readmissions and whether patient attitudes and beliefs about their care were predictive of subsequent readmission. Patient survey responses were entered as continuous independent variables (ranging from 1-4 or 1-5, as appropriate). Multivariable logistic regression was used to determine whether patients could predict their readmissions independent of demographic variables and expected readmission rate (modeled continuously); we repeated this model after dichotomizing the patient’s estimate of the likelihood of readmission as either “unlikely” or “likely.” Patients with missing survey responses were excluded from individual models without imputation. The study was approved by the Johns Hopkins institutional review board.
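
As an illustration of the dichotomized analysis described above, the sketch below fits a logistic regression of actual 30-day readmission on the patient’s self-estimate (“likely” vs “unlikely”), adjusted for the other covariates. The file, variable names, and cut point are hypothetical; this is not the study’s actual code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per respondent, with survey answers
# linked to administrative data and the expected readmission rate
df = pd.read_csv("survey_linked_admin.csv")

# Hypothetical cut point on the 5-level self-estimate ("unlikely" vs "likely")
df["predicts_readmit"] = (df["self_estimate"] >= 3).astype(int)

model = smf.logit(
    "readmit_30d ~ predicts_readmit + expected_readmit_rate + age"
    " + C(sex) + C(race) + C(payer)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals
print(np.exp(model.params))
print(np.exp(model.conf_int()))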

 

 

RESULTS

Responses were obtained from 895 patients. Their median age was 56 years (interquartile range, 43-67), 51.4% were female, and 41.7% were white. Mean SOI was 2.53 (on a 1-4 scale), and the median length of stay, 5.2 days (range, 1-66 days), was representative of our medical service. Family members reported filling out the survey in 57 cases. The primary payer was Medicare in 40.7%, Medicaid in 24.9%, and other in 34.4%. A total of 138 patients (15.4%) were readmitted within 30 days. The Table shows survey responses and associated readmission rates. None of the attitudes related to readmission were predictive of actual readmission. However, patients were able to predict their own readmissions (P = .002 for linear trend). After adjustment for expected readmission rate, race, sex, age, and payer, the trend remained significant (P = .005). Other significant predictors of readmission in this model included expected readmission rate (P = .002), age (P = .02), and payer (P = .002). After dichotomizing the patient estimate of readmission risk as “unlikely” (N = 581) or “likely” (N = 314), the unadjusted odds ratio associating a patient-estimated risk of readmission as “likely” with actual readmission was 1.8 (95% confidence interval, 1.2-2.5). The adjusted odds ratio (including the variables above) was 1.6 (1.1-2.4).

DISCUSSION

Our findings demonstrate that patients are able to quantify their own readmission risk. This was true even after adjustment for expected readmission rate, age, sex, race, and payer. However, we did not identify any patient attitudes, beliefs, or preferences related to readmission or discharge instructions that were associated with subsequent rehospitalization. Reassuringly, more than 80% of patients who responded to the survey indicated that they would be sad, frustrated, or disappointed should readmission occur. This suggests that most patients are invested in preventing rehospitalization. Also reassuring was that patients indicated that they agreed with the discharge care plan and intended to follow their discharge instructions.

The major limitation of this study is that it was a convenience sample. Surveys were distributed inconsistently by nursing unit staff, preventing us from calculating a response rate. Further, it is possible, if not likely, that those patients with higher levels of engagement were more likely to take the time to respond, enriching our sample with activated patients. Although we allowed family members to fill out surveys on behalf of patients, this was done in fewer than 10% of instances; as such, our data may have limited applicability to patients who are physically or cognitively unable to participate in the discharge process. Finally, in this study, we did not capture readmissions to other facilities.

We conclude that patients are able to predict their own readmissions, even after accounting for other potential predictors of readmission. However, we found no evidence to support the possibility that low levels of engagement, limited trust in the healthcare team, or nonchalance about being readmitted are associated with subsequent rehospitalization. Whether asking patients about their perceived risk of readmission might help target readmission prevention programs deserves further study.

Acknowledgments

Dr. Daniel J. Brotman had full access to the data in the study and takes responsibility for the integrity of the study data and the accuracy of the data analysis. The authors also thank the following individuals for their contributions: Drafting the manuscript (Brotman); revising the manuscript for important intellectual content (Brotman, Shihab, Tieu, Cheng, Bertram, Hoyer, Deutschendorf); acquiring the data (Brotman, Shihab, Tieu, Cheng, Bertram, Deutschendorf); interpreting the data (Brotman, Shihab, Tieu, Cheng, Bertram, Hoyer, Deutschendorf); and analyzing the data (Brotman). The authors thank nursing leadership and nursing unit staff for their assistance in distributing surveys.

Funding support: Johns Hopkins Hospitalist Scholars Program

Disclosures: The authors have declared no conflicts of interest.

References

1. Hoyer EH, Brotman DJ, Apfel A, et al. Improving outcomes after hospitalization: a prospective observational multi-center evaluation of care-coordination strategies on 30-day readmissions to Maryland hospitals. J Gen Intern Med. 2017 (in press). PubMed
2. Oduyebo I, Lehmann CU, Pollack CE, et al. Association of self-reported hospital discharge handoffs with 30-day readmissions. JAMA Intern Med. 2013;173(8):624-629. PubMed
3. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277-282. PubMed
4. Hoyer EH, Needham DM, Miller J, Deutschendorf A, Friedman M, Brotman DJ. Functional status impairment is associated with unplanned readmissions. Arch Phys Med Rehabil. 2013;94(10):1951-1958. PubMed
5. Hoyer EH, Odonkor CA, Bhatia SN, Leung C, Deutschendorf A, Brotman DJ. Association between days to complete inpatient discharge summaries with all-payer hospital readmissions in Maryland. J Hosp Med. 2016;11(6):393-400. PubMed



The Influence of Hospitalist Continuity on the Likelihood of Patient Discharge in General Medicine Patients

Article Type
Changed
Mon, 10/29/2018 - 21:32

In addition to treating patients, physicians frequently have other time commitments, including administrative, teaching, research, and family duties. Inpatient medicine is particularly unforgiving of these nonclinical duties, since patients must be assessed on a daily basis. Because of this, it is not uncommon for inpatient care responsibility to be switched between physicians to create time for nonclinical duties and personal health.

Physician continuity of care has been studied far less frequently in the inpatient setting than in the ambulatory setting. Studies of inpatient continuity have focused primarily on patient discharge (likely because of its objective nature) over weekends (likely because weekend cross-coverage is common) and have reported conflicting results.1-3 However, discontinuity of care is not isolated to the weekend, since hospitalist switches can occur at any time. In addition, expressing hospitalist continuity of care as a dichotomous variable (Was there weekend cross-coverage?) may capture continuity incompletely, since discharge likelihood might change with the consecutive number of days that a hospitalist is on service. This study measured the influence of hospitalist continuity throughout the patient’s hospitalization (rather than just the weekend) on daily patient discharge.

METHODS

Study Setting and Databases Used for Analysis

The study was conducted at The Ottawa Hospital, Ontario, Canada, a 1000-bed teaching hospital with 2 campuses that is the primary referral center in our region. The division of general internal medicine has 6 patient services (or “teams”) across the 2 campuses, each led by a staff hospitalist (exclusively general internists) and staffed by a senior medical resident (2nd year of training) and varying numbers of interns and medical students. Staff hospitalists do not cover more than one patient service, even on weekends.

Patients are admitted to each service on a daily basis and almost exclusively from the emergency room. Assignment of patients is essentially random since all services have the same clinical expertise. At a particular campus, the number of patients assigned daily to each service is usually equivalent between teams. Patients almost never switch between teams but may be transferred to another specialty. The study was approved by our local research ethics board.

The Patient Registry Database records for each patient the date and time of admissions (defined as the moment that a patient’s admission request is entered into the database), death or discharge from hospital (defined as the time when the patient’s discharge from hospital was entered into the database), or transfer to another specialty. It also records emergency visits, patient demographics, and location during admission. The Laboratory Database records all laboratory tests and their results.

Study Cohort

The Patient Registry Database was used to identify all individuals who were admitted to the general medicine services between January 1 and December 31, 2015. This time frame was selected to ensure that data were complete and current. General medicine services were analyzed because they are collectively the largest inpatient specialty in the hospital.

Study Outcome

The primary outcome was discharge from hospital as determined from the Patient Registry Database. Patients who died or were transferred to another service were not counted as outcomes.

Covariables

The primary exposure variable was the consecutive number of days (including weekends) that a particular hospitalist rounded on patients on a particular general medicine service. This was measured using call schedules. Other covariates included the tomorrow’s expected number of discharges (TEND) model’s daily discharge probability and its components. The TEND model4 used patient factors (age, Laboratory Abnormality Physiological Score [LAPS]5 calculated at admission) and hospitalization factors (hospital campus and service, admission urgency, day of the week, ICU status) to predict the daily discharge probability. In a validation population, these daily discharge probabilities (when summed over a particular day) strongly predicted the daily number of discharges (adjusted R2 of 89.2% [P < .001]; median relative difference between observed and expected number of discharges of only 1.4% [interquartile range, IQR: −5.5% to 7.1%]). The expected annual death risk was determined using the HOMR-now! model.6 This model used routinely collected data available at patient admission regarding the patient (sex, life-table-estimated 1-year death risk, Charlson score, current living location, previous cancer clinic status, and number of emergency department visits in the previous year) and the hospitalization (urgency, service, and LAPS score). The model explained more than half of the total variability in the likelihood of death (Nagelkerke’s R2 value of 0.53),7 was highly discriminative (C-statistic 0.92), and accurately predicted death risk (calibration slope 0.98).

 

 

Analysis

Logistic generalized estimating equation (GEE) methods were used to model the adjusted daily discharge probability.8 Data in the analytical dataset were expressed in a patient-day format (each dataset row represented one day for a particular patient). This permitted the inclusion of time-dependent covariates and allowed the GEE model to cluster hospitalization days within patients.

Model construction started with the TEND daily discharge probability and the HOMR-now! expected annual death risk (both expressed as log-odds). Then, hospitalist continuity was entered as a time-dependent covariate (ie, its value changed every day). Linear, square root, and natural logarithm forms of physician continuity were examined to determine the best fit (assessed using the QIC statistic9). Finally, the individual components of the TEND model were also offered to the model, and those that significantly improved fit were retained. The GEE model used an independence working correlation structure because this minimized the QIC statistic in the base model. All covariates in the final daily discharge probability model were used in the hospital death model. Analyses were conducted using SAS 9.4 (SAS Institute, Cary, NC).
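A minimal, self-contained sketch of this model-building step is shown below using Python’s statsmodels GEE implementation rather than the SAS procedures used in the study; the synthetic data, variable names, and the simple linear continuity term are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic patient-day data standing in for the analytical data set.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "patient_id": rng.integers(0, 80, n),
    "tend_prob": rng.uniform(0.05, 0.50, n),    # TEND daily discharge probability
    "homr_prob": rng.uniform(0.01, 0.40, n),    # HOMR-now! expected annual death risk
    "continuity_days": rng.integers(1, 15, n),  # consecutive days with the same hospitalist
})
df["discharged"] = rng.binomial(1, df["tend_prob"])

# Express the two base-model predictions as log-odds, as described in the text.
for col in ("tend_prob", "homr_prob"):
    df[f"{col}_logodds"] = np.log(df[col] / (1 - df[col]))

# Logistic GEE clustering hospital days within patients, with an independence working
# correlation structure and continuity entered as a linear time-dependent covariate.
model = smf.gee(
    "discharged ~ tend_prob_logodds + homr_prob_logodds + continuity_days",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Independence(),
)
result = model.fit()
print(result.summary())
# Square-root and log forms of continuity, and individual TEND components, would be
# compared against this base model on fit (eg, the QIC statistic) before finalizing.
```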

RESULTS

There were 6,405 general medicine admissions involving 5,208 patients and 38,967 patient-days between January 1 and December 31, 2015 (Appendix A). Patients were elderly and evenly divided by sex, and 85% were admitted from the community. Comorbidities were common (median coded Charlson score, 2), and 6.0% of patients were known to our cancer clinic. The median length of stay was 4 days (IQR, 2–7); 378 admissions (5.9%) ended in death and 121 admissions (1.9%) ended in transfer to another service.

Forty-one staff physicians had at least 1 day on service. The median total time on service per physician was 9 weeks (IQR, 1.8–10.9 weeks). Changes in hospitalist coverage were common: hospitalizations had a median of 1 (IQR, 1–2) physician switch and a median of 1 (IQR, 1–2) different physicians. However, patients spent a median of 100% (IQR, 66.7%–100%) of their total hospitalization with their primary hospitalist. The median duration of individual physician “stints” on service was 5 days (IQR, 2–7; range, 1–42).

The TEND model accurately estimated the daily discharge probability for the entire cohort, with 5,833 observed and 5,718.6 expected discharges during 38,967 patient-days (O/E 1.02, 95% CI 0.99–1.05). Discharge probability increased as hospitalist continuity increased, but the increase was statistically significant only when hospitalist continuity exceeded 4 days. Other covariables also significantly influenced discharge probability (Appendix B).

After adjusting for important covariables (Appendix C), hospitalist continuity was significantly associated with daily discharge probability (Figure). Discharge probability increased linearly with the consecutive number of days that a hospitalist had treated a patient. For each additional consecutive day with the same hospitalist, the adjusted daily odds of discharge increased by 2% (adjusted odds ratio [OR] 1.02, 95% CI 1.01–1.02; Appendix C). When the consecutive number of days that hospitalists remained on service increased from 1 to 28 days, the adjusted discharge probability for the average patient increased from 18.1% to 25.7%. Discharge was significantly influenced by other factors (Appendix C). Continuity did not influence the risk of death in hospital (Appendix D).
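As a rough consistency check (not an additional analysis), the snippet below back-calculates the per-day odds ratio implied by the reported adjusted probabilities under the linear log-odds continuity term described in the Methods; the arithmetic is ours, and the small difference from the published 1.02 reflects rounding of the model coefficient.

```python
# Back-calculate the per-day odds ratio implied by the reported adjusted discharge
# probabilities at 1 and 28 consecutive days with the same hospitalist, assuming the
# linear (log-odds) continuity term: odds(d) = odds(1) * OR**(d - 1).
p1, p28, extra_days = 0.181, 0.257, 27
odds1, odds28 = p1 / (1 - p1), p28 / (1 - p28)
implied_or = (odds28 / odds1) ** (1 / extra_days)
print(f"implied per-day OR ~ {implied_or:.3f}")  # ~1.017, consistent with the rounded 1.02
```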

DISCUSSION

In a general medicine service at a large teaching hospital, this study found that greater hospitalist continuity was associated with a significantly increased adjusted daily discharge probability, increasing (in the average patient) from 18.1% to 25.7% when the consecutive number of hospitalist days on service increased from 1 to 28 days, respectively.

The study demonstrated some interesting findings. First, it shows that shifting patient care between physicians can significantly influence patient outcomes. This could be a function of incomplete transfer of knowledge between physicians, a phenomenon that should be expected given the extensive amount of information, both explicit and implicit, that physicians collect about particular patients during their hospitalization. Second, continuity of care could increase a physician’s and a patient’s confidence in clinical decision-making. Physicians may be subconsciously more trusting of their instincts (and the decisions based on those instincts) when they have been on service for a while, and patients may more readily trust the recommendations of a physician they have had throughout their stay. Finally, those wishing to decrease patient length of stay might consider minimizing the extent to which hospitalists sign over patient care to colleagues.

Several issues should be noted when interpreting the results of the study. First, the study examined only patient discharge and death; these are by no means the only, or the most important, outcomes that might be influenced by hospitalist continuity. Second, the study was limited to a single service at a single center. Third, the analysis did not account for house-staff continuity. However, because hospitalists and house-staff at the study hospital invariably switched at different times, it is unlikely that hospitalist continuity was a surrogate for house-staff continuity.


Disclosures

This study was supported by the Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. The author has nothing to disclose.

References

1. Ali NA, Hammersley J, Hoffmann SP, et al. Continuity of care in intensive care units: a cluster-randomized trial of intensivist staffing. Am J Respir Crit Care Med. 2011;184(7):803-808.
2. Epstein K, Juarez E, Epstein A, Loya K, Singer A. The impact of fragmentation of hospitalist care on length of stay. J Hosp Med. 2010;5(6):335-338.
3. Blecker S, Shine D, Park N, et al. Association of weekend continuity of care with hospital length of stay. Int J Qual Health Care. 2014;26(5):530-537.
4. van Walraven C, Forster AJ. The TEND (Tomorrow’s Expected Number of Discharges) model accurately predicted the number of patients who were discharged from the hospital in the next day. J Hosp Med. In press.
5. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232-239.
6. van Walraven C, Forster AJ. HOMR-now! A modification of the HOMR score that predicts 1-year death risk for hospitalized patients using data immediately available at patient admission. Am J Med. In press.
7. Nagelkerke NJ. A note on a general definition of the coefficient of determination. Biometrika. 1991;78(3):691-692.
8. Stokes ME, Davis CS, Koch GG. Generalized estimating equations. In: Categorical Data Analysis Using the SAS System. 2nd ed. Cary, NC: SAS Institute Inc; 2000:469-549.
9. Pan W. Akaike’s information criterion in generalized estimating equations. Biometrics. 2001;57(1):120-125.

Journal of Hospital Medicine. 2018;13(10):692-694. Published online first March 26, 2018. © 2018 Society of Hospital Medicine

Correspondence: Carl van Walraven, MD, MSc, FRCPC, ASB1-003, 1053 Carling Ave, Ottawa, ON K1Y 4E9; Telephone: 613-761-4903; Fax: 613-761-5492; E-mail: carlv@ohri.ca


Training Residents in Hospital Medicine: The Hospitalist Elective National Survey

Article Type
Changed
Sat, 09/29/2018 - 22:41

Hospital medicine has become the fastest growing medicine subspecialty, though no standardized hospitalist-focused educational program is required to become a practicing adult medicine hospitalist.1 Historically, adult hospitalists have had little additional training beyond residency, yet, as residency training adapts to duty hour restrictions, patient caps, and increasing attending oversight, it is not clear whether traditional rotations and curricula provide adequate preparation for independent practice as an adult hospitalist.2-5 Several types of training and educational programs have emerged to fill this potential gap, including hospital medicine fellowships, residency pathways, early career faculty development programs (eg, the Academic Hospitalist Academy, sponsored by the Society of Hospital Medicine and the Society of General Internal Medicine), and hospitalist-focused resident rotations.6-10 These activities are intended to ensure that residents and early career physicians gain the skills and competencies required to practice hospital medicine effectively.

Hospital medicine fellowships, residency pathways, and faculty development have been described previously.6-8 However, the prevalence and characteristics of hospital medicine-focused resident rotations are unknown, and these rotations are rarely publicized beyond local residency programs. Our study aims to determine the prevalence, purpose, and function of hospitalist-focused rotations within residency programs and explore the role these rotations have in preparing residents for a career in hospital medicine.

METHODS

Study Design, Setting, and Participants

We conducted a cross-sectional study involving the largest 100 Accreditation Council for Graduate Medical Education (ACGME) internal medicine residency programs. We chose the largest programs because we hypothesized that they would be more likely than smaller programs to have the infrastructure to support hospital medicine-focused rotations. The UCSF Committee on Human Research approved this study.

Survey Development

We developed a study-specific survey (the Hospitalist Elective National Survey [HENS]) to assess the prevalence, structure, curricular goals, and perceived benefits of distinct hospitalist rotations as defined by individual residency programs. The survey prompted respondents to consider a “hospitalist-focused” rotation as one that differs from a traditional inpatient “ward” rotation and whose emphasis is on hospitalist-specific training, clinical skills, or career development. The 18-question survey (Appendix 1) included fixed-choice, multiple-choice, and open-ended responses.

Data Collection

Using publicly available data from the ACGME website (www.acgme.org), we identified the largest 100 internal medicine programs based on the total number of residents; all included programs had 81 or more residents. An electronic survey was e-mailed to the leadership of each program. In May 2015, surveys were sent to residency program directors (PDs); if a PD did not respond after 2 attempts, the associate program director (APD) was contacted twice. If neither of these leaders responded, the survey was sent to residency program administrators or hospital medicine division chiefs. Only one survey was completed per site.

Data Analysis

We used descriptive statistics to summarize quantitative data. Responses to open-ended qualitative questions about the goals, strengths, and design of rotations were analyzed using thematic analysis.11 During analysis, we iteratively developed and refined codes that identified important concepts that emerged from the data. Two members of the research team trained in qualitative data analysis coded these data independently (SL & JH).
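For the fixed-choice items, the descriptive summaries reported below amount to simple counts and proportions. The following sketch (not the study’s analysis code) illustrates that calculation on a hypothetical extract; all column names and values are assumed for the example.

```python
import pandas as pd

# Hypothetical survey extract: one row per responding program.
responses = pd.DataFrame({
    "program":         ["A", "B", "C", "D", "E"],
    "has_hm_rotation": [True, False, True, True, False],
    "rotation_type":   ["elective", None, "mandatory", "elective", None],
})

# Prevalence of hospitalist-focused rotations among respondents.
n_total = len(responses)
n_with = int(responses["has_hm_rotation"].sum())
print(f"prevalence: {n_with}/{n_total} ({n_with / n_total:.0%})")

# Breakdown of elective versus mandatory rotations among programs that have one.
print(responses.loc[responses["has_hm_rotation"], "rotation_type"]
      .value_counts(normalize=True))
```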

RESULTS

Eighty-two residency program leaders (53 PDs, 19 APDs, and 10 chiefs/administrators) responded to the survey (82% response rate). Among all responders, the prevalence of hospitalist-focused rotations was 50% (41/82). Of these 41 rotations, 85% (35/41) were elective and 15% (6/41) were mandatory. Hospitalist rotations had been in existence for 1 to 15 years (mean, 4.78 years; SD, 3.5).

Of the 41 programs that did not have a hospital medicine-focused rotation, the key barriers identified were a lack of a well-defined model (29%), low faculty interest (15%), low resident interest (12%), and lack of funding (5%). Despite these barriers, 9 of these 41 programs (22%) planned to start a rotation in the future, and 3 of them (7%) planned to do so within the year.


Of the 41 established rotations, most were 1 month in duration (31/41, 76%) and most included second-year (30/41, 73%) and/or third-year residents (32/41, 78%). In addition to clinical work, most rotations had a nonclinical component that included teaching, research/scholarship, and/or work on quality improvement or patient safety (Table 1). Clinical activities, nonclinical activities, and curricular elements varied across institutions (Table 1).

Most programs with rotations (39/41, 95%) reported that their hospitalist rotation filled at least one gap in the traditional residency curriculum. The most frequently identified gaps were allowing progressive clinical autonomy (59%, 24/41), learning about quality improvement and high-value care (41%, 17/41), and preparing to become a practicing hospitalist (39%, 16/41). Most respondents (66%, 27/41) reported that the rotation helped prepare trainees for their first year as an attending.

Results of thematic analysis related to the goals, strengths, and design of rotations are shown in Table 2. Five themes emerged relating to autonomy, mentorship, hospitalist skills, real-world experience, and training and curriculum gaps. These themes describe the underlying structure in which these rotations promote career preparation and skill development.


DISCUSSION

The Hospitalist Elective National Survey provides insight into a growing component of hospitalist-focused training and preparation. Fifty percent of the ACGME residency programs surveyed in this study had a hospitalist-focused rotation. Rotation characteristics were heterogeneous, perhaps reflecting both the homegrown nature of their development and the lack of national data to guide what constitutes an “ideal” rotation. Common functions of the rotations included providing career mentorship and allowing trainees to gain experience “being a hospitalist.” Other key elements included providing additional clinical autonomy and teaching material outside of traditional residency curricula, such as quality improvement, patient safety, billing, and healthcare finances.

Prior research has explored other training for hospitalists, such as fellowships, residency pathways, and faculty development.6-8 A hospital medicine fellowship provides extensive training, but without a requirement that adult hospitalists complete such training to practice (as now exists in pediatric hospital medicine), the impact of fellowship training may be limited by its scale.12,13 Longitudinal hospitalist residency pathways provide comprehensive skill development but often require an early career commitment from trainees.7 Faculty development can be another tool to foster career growth, though it requires local investment from hospitalist groups that may not have the resources or experience to support it.8 Our study highlights that hospitalist-focused rotations within residency programs can train physicians for a career in hospital medicine. Hospitalist and residency leaders should consider that these rotations might be the only hospital medicine-focused training that new hospitalists receive. Given the variable nature of these rotations nationally, developing standards around core hospitalist competencies within these rotations should be a key component of career preparation and a goal for the field at large.14,15

Our study has limitations. The survey focused only on internal medicine as it is the most common training background of hospitalists; however, the field has grown to include other specialties including pediatrics, neurology, family medicine, and surgery. In addition, the survey reviewed the largest ACGME Internal Medicine programs to best evaluate prevalence and content—it may be that some smaller programs have rotations with different characteristics that we have not captured. Lastly, the survey reviewed the rotations through the lens of residency program leadership and not trainees. A future survey of trainees or early career hospitalists who participated in these rotations could provide a better understanding of their achievements and effectiveness.

CONCLUSION

We anticipate that the demand for hospitalist-focused training will continue to grow as more residents in training seek to enter the specialty. Hospitalist and residency program leaders have an opportunity within residency training programs to build new or further develop existing hospital medicine-focused rotations. The HENS survey demonstrates that hospitalist-focused rotations are prevalent in residency education and have the potential to play an important role in hospitalist training.

Disclosure

The authors declare no conflicts of interest in relation to this manuscript.

References

1. Wachter RM, Goldman L. Zero to 50,000 – The 20th anniversary of the hospitalist. N Engl J Med. 2016;375:1009-1011.
2. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists’ needs. J Gen Intern Med. 2008;23:1110-1115.
3. Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine’s promise through internal medicine residency redesign. Mt Sinai J Med. 2008;5:436-441.
4. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111:247-254.
5. Kumar A, Smeraglio A, Witteles R, Harman S, Nallamshetty S, Rogers A, Harrington R, Ahuja N. A resident-created hospitalist curriculum for internal medicine housestaff. J Hosp Med. 2016;11:646-649.
6. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1-7.
7. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12:173-176.
8. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6:161-166.
9. Academic Hospitalist Academy. Course description, objectives, and society sponsorship. Available at: https://academichospitalist.org/. Accessed August 23, 2017.
10. Amin AN. A successful hospitalist rotation for senior medicine residents. Med Educ. 2003;37:1042.
11. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77-101.
12. American Board of Medical Specialties. ABMS officially recognizes pediatric hospital medicine subspecialty certification. Available at: http://www.abms.org/news-events/abms-officially-recognizes-pediatric-hospital-medicine-subspecialty-certification/. Accessed August 23, 2017.
13. Wiese J. Residency training: beginning with the end in mind. J Gen Intern Med. 2008;23(7):1122-1123.
14. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(Suppl 1):48-56.
15. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine – 2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287.

Journal of Hospital Medicine. 2018;13(9):623-625. Published online first March 26, 2018.

Hospital medicine has become the fastest growing medicine subspecialty, though no standardized hospitalist-focused educational program is required to become a practicing adult medicine hospitalist.1 Historically, adult hospitalists have had little additional training beyond residency, yet, as residency training adapts to duty hour restrictions, patient caps, and increasing attending oversight, it is not clear if traditional rotations and curricula provide adequate preparation for independent practice as an adult hospitalist.2-5 Several types of training and educational programs have emerged to fill this potential gap. These include hospital medicine fellowships, residency pathways, early career faculty development programs (eg, Society of Hospital Medicine/ Society of General Internal Medicine sponsored Academic Hospitalist Academy), and hospitalist-focused resident rotations.6-10 These activities are intended to ensure that residents and early career physicians gain the skills and competencies required to effectively practice hospital medicine.

Hospital medicine fellowships, residency pathways, and faculty development have been described previously.6-8 However, the prevalence and characteristics of hospital medicine-focused resident rotations are unknown, and these rotations are rarely publicized beyond local residency programs. Our study aims to determine the prevalence, purpose, and function of hospitalist-focused rotations within residency programs and explore the role these rotations have in preparing residents for a career in hospital medicine.

METHODS

Study Design, Setting, and Participants

We conducted a cross-sectional study involving the largest 100 Accreditation Council for Graduate Medical Education (ACGME) internal medicine residency programs. We chose the largest programs as we hypothesized that these programs would be most likely to have the infrastructure to support hospital medicine focused rotations compared to smaller programs. The UCSF Committee on Human Research approved this study.

Survey Development

We developed a study-specific survey (the Hospitalist Elective National Survey [HENS]) to assess the prevalence, structure, curricular goals, and perceived benefits of distinct hospitalist rotations as defined by individual resident programs. The survey prompted respondents to consider a “hospitalist-focused” rotation as one that is different from a traditional inpatient “ward” rotation and whose emphasis is on hospitalist-specific training, clinical skills, or career development. The 18-question survey (Appendix 1) included fixed choice, multiple choice, and open-ended responses.

Data Collection

Using publicly available data from the ACGME website (www.acgme.org), we identified the largest 100 medicine programs based on the total number of residents. This included programs with 81 or more residents. An electronic survey was e-mailed to the leadership of each program. In May 2015, surveys were sent to Residency Program Directors (PD), and if they did not respond after 2 attempts, then Associate Program Directors (APD) were contacted twice. If both these leaders did not respond, then the survey was sent to residency program administrators or Hospital Medicine Division Chiefs. Only one survey was completed per site.

Data Analysis

We used descriptive statistics to summarize quantitative data. Responses to open-ended qualitative questions about the goals, strengths, and design of rotations were analyzed using thematic analysis.11 During analysis, we iteratively developed and refined codes that identified important concepts that emerged from the data. Two members of the research team trained in qualitative data analysis coded these data independently (SL & JH).

RESULTS

Eighty-two residency program leaders (53 PDs, 19 APDs, and 10 chiefs/administrators) responded to the survey (82% response rate). Among all respondents, the prevalence of hospitalist-focused rotations was 50% (41/82). Of these 41 rotations, 85% (35/41) were elective and 15% (6/41) were mandatory. Rotations had been in existence for 1 to 15 years (mean 4.78 years, SD 3.5).

Of the 41 programs that did not have a hospital medicine-focused rotation, the key barriers identified were lack of a well-defined model (29%), low faculty interest (15%), low resident interest (12%), and lack of funding (5%). Despite these barriers, 9 of these 41 programs (22%) stated that they planned to start a rotation in the future, and 3 (7%) planned to start one within the year.


Of the 41 established rotations, most were 1 month in duration (31/41, 76%), and most included second-year residents (30/41, 73%) and/or third-year residents (32/41, 78%). In addition to clinical work, most rotations had a nonclinical component that included teaching, research/scholarship, and/or work on quality improvement or patient safety (Table 1). Clinical activities, nonclinical activities, and curricular elements varied across institutions (Table 1).

Most programs with rotations (39/41, 95%) reported that their hospitalist rotation filled at least one gap in the traditional residency curriculum. The most frequently identified gaps were allowing progressive clinical autonomy (59%, 24/41), learning about quality improvement and high-value care (41%, 17/41), and preparing to become a practicing hospitalist (39%, 16/41). Most respondents (66%, 27/41) reported that the rotation helped prepare trainees for their first year as an attending.

Results of the thematic analysis of the goals, strengths, and design of rotations are shown in Table 2. Five themes emerged, relating to autonomy, mentorship, hospitalist skills, real-world experience, and training and curriculum gaps. These themes describe the ways in which these rotations promote career preparation and skill development.

 

 

DISCUSSION

The Hospitalist Elective National Survey provides insight into a growing component of hospitalist-focused training and preparation. Fifty percent of the ACGME residency programs surveyed had a hospitalist-focused rotation. Rotation characteristics were heterogeneous, perhaps reflecting both the homegrown nature of their development and the lack of national studies or data to guide what constitutes an “ideal” rotation. Common functions of rotations included providing career mentorship and allowing trainees to gain experience “being a hospitalist.” Other key elements included providing additional clinical autonomy and teaching material outside of traditional residency curricula, such as quality improvement, patient safety, billing, and healthcare finances.

Prior research has explored other training for hospitalists, such as fellowships, pathways, and faculty development.6-8 A hospital medicine fellowship provides extensive training, but without a practice requirement in adult medicine (as now exists in pediatric hospital medicine), its impact may be limited by its scale.12,13 Longitudinal hospitalist residency pathways provide comprehensive skill development but often require an early career commitment from trainees.7 Faculty development can be another tool to foster career growth, though it requires local investment from hospitalist groups that may not have the resources or experience to support it.8 Our study highlights that hospitalist-focused rotations within residency programs can help train physicians for a career in hospital medicine. Hospitalist and residency leaders should consider that these rotations may be the only hospital medicine-focused training that new hospitalists receive. Given the variable nature of these rotations nationally, developing standards around core hospitalist competencies within these rotations should be a key component of career preparation and a goal for the field at large.14,15

Our study has limitations. The survey focused only on internal medicine as it is the most common training background of hospitalists; however, the field has grown to include other specialties including pediatrics, neurology, family medicine, and surgery. In addition, the survey reviewed the largest ACGME Internal Medicine programs to best evaluate prevalence and content—it may be that some smaller programs have rotations with different characteristics that we have not captured. Lastly, the survey reviewed the rotations through the lens of residency program leadership and not trainees. A future survey of trainees or early career hospitalists who participated in these rotations could provide a better understanding of their achievements and effectiveness.

CONCLUSION

We anticipate that demand for hospitalist-focused training will continue to grow as more residents seek to enter the specialty. Hospitalist and residency program leaders have an opportunity within residency training programs to build new hospital medicine-focused rotations or further develop existing ones. The HENS survey demonstrates that hospitalist-focused rotations are prevalent in residency education and have the potential to play an important role in hospitalist training.

Disclosure

The authors declare no conflicts of interest in relation to this manuscript.

References

1. Wachter RM, Goldman L. Zero to 50,000 – The 20th Anniversary of the Hospitalist. N Engl J Med. 2016;375:1009-1011.
2. Glasheen JJ, Siegal EM, Epstein K, Kutner J, Prochazka AV. Fulfilling the promise of hospital medicine: tailoring internal medicine training to address hospitalists’ needs. J Gen Intern Med. 2008;23:1110-1115.
3. Glasheen JJ, Goldenberg J, Nelson JR. Achieving hospital medicine’s promise through internal medicine residency redesign. Mt Sinai J Med. 2008;5:436-441.
4. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111:247-254.
5. Kumar A, Smeraglio A, Witteles R, Harman S, Nallamshetty S, Rogers A, Harrington R, Ahuja N. A resident-created hospitalist curriculum for internal medicine housestaff. J Hosp Med. 2016;11:646-649.
6. Ranji SR, Rosenman DJ, Amin AN, Kripalani S. Hospital medicine fellowships: works in progress. Am J Med. 2006;119(1):72.e1-7.
7. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12:173-176.
8. Sehgal NL, Sharpe BA, Auerbach AA, Wachter RM. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6:161-166.
9. Academic Hospitalist Academy. Course Description, Objectives and Society Sponsorship. Available at: https://academichospitalist.org/. Accessed August 23, 2017.
10. Amin AN. A successful hospitalist rotation for senior medicine residents. Med Educ. 2003;37:1042.
11. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77-101.
12. American Board of Medical Specialties. ABMS Officially Recognizes Pediatric Hospital Medicine Subspecialty Certification. Available at: http://www.abms.org/news-events/abms-officially-recognizes-pediatric-hospital-medicine-subspecialty-certification/. Accessed August 23, 2017.
13. Wiese J. Residency training: beginning with the end in mind. J Gen Intern Med. 2008;23(7):1122-1123.
14. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1(Suppl 1):48-56.
15. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine – 2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287.


Journal of Hospital Medicine. 2018;13(9):623-625. Published online first March 26, 2018. © 2018 Society of Hospital Medicine.

Correspondence: Steven Ludwin, MD, Assistant Professor of Medicine, Division of Hospital Medicine, University of California, San Francisco, 533 Parnassus Avenue, Box 0131, San Francisco, CA 94113; Telephone: 415-476-4814; Fax: 415-502-1963; E-mail: Steven.Ludwin@ucsf.edu

Evaluating the Benefits of Hospital Room Artwork for Patients Receiving Cancer Treatment: A Randomized Controlled Trial


With hospital reimbursement increasingly being linked to patient satisfaction,1 about half of US hospitals have embraced arts programs as a means of humanizing clinical environments and improving the patient experience.2,3 There is emerging evidence that integrating such programs into clinical settings is associated with less pain, stress, and anxiety,4-10 as well as improved mood,11 greater levels of interaction,12 and feeling less institutionalized.13 However, existing studies have been undertaken with variable methodological rigor,14 and few randomized controlled trials (RCTs) have linked specific design features or interventions directly to healthcare outcomes. We designed an RCT to test the hypotheses that (1) placing a painting by a local artist in the line of vision of hospitalized patients would improve psychological and clinical outcomes and patient satisfaction and (2) letting patients choose their own painting would offer even greater benefit in these areas.

METHODS

From 2014 to 2016, our research team recruited inpatients who were being treated in the Pennsylvania State University Hershey Cancer Institute in Hershey, Pennsylvania. Patients were eligible if they were English speaking, over the age of 19, not cognitively impaired, and had been admitted for cancer-related treatments that required at least a 3-day inpatient stay. During recruitment, patients were told that the study was on patient care and room décor, and thus those who were not being given artwork did not know about the artwork option. By using a permuted block design with mixed block size, we randomly assigned consenting patients to 1 of the following 3 groups: (1) those who chose the painting displayed in their rooms, (2) those whose painting was randomly selected, and (3) those with no painting in their rooms, only white boards in their line of vision (see Figure 1). All paintings were created by artists in central Pennsylvania and reproduced as high-quality digital prints for the study, costing approximately $90 apiece. Members of the research team visited patients in the designated rooms 3 times during their stay—within 24 hours of being admitted, within 24 to 48 hours of the first visit, and within 24 to 48 hours of the second visit—with each visit lasting from 5 to 10 minutes. Patients who were given the opportunity to select art for their rooms were shown a catalogue of approximately 20 available paintings from which to choose a desired print; as with the group whose paintings were randomly selected for them, patients who made a choice had a print immediately hung in their room by members of the research team for the entirety of their inpatient stay.
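
To illustrate the allocation scheme described above, the following sketch implements permuted block randomization with mixed block sizes. The 2:2:1 ratio (choice : randomly assigned art : no art), the block sizes, and the function name are illustrative assumptions; the study's actual block structure is not reported.

```python
import random

def permuted_block_sequence(n_patients, seed=2014):
    """Build a randomization list using permuted blocks of mixed size.

    Assumes a 2:2:1 allocation (choice : randomly assigned art : no art),
    consistent with the reported 2:1 art-to-control design; block sizes of
    5 and 10 are illustrative choices, not the study's actual scheme.
    """
    rng = random.Random(seed)
    base_block = ["choice"] * 2 + ["random_art"] * 2 + ["no_art"]
    sequence = []
    while len(sequence) < n_patients:
        block = base_block * rng.choice([1, 2])  # mixed block size: 5 or 10
        rng.shuffle(block)                       # permute assignments within the block
        sequence.extend(block)
    return sequence[:n_patients]

# Example: assignments for the first 10 consenting patients
print(permuted_block_sequence(10))
```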

Outcomes and Measures

The primary outcomes were psychological and included anxiety, mood, depression, and sense of control and/or influence. These were measured using the validated State-Trait Anxiety Inventory (STAI),15 an emotional thermometer instrument (ETI),16 and a self-designed instrument measuring one’s sense of control and influence over the environment. Secondary outcomes were clinical (pain, quality of life [QOL], and length of stay [LOS]) or related to perceptions of the hospital environment. These were assessed using data extracted from the electronic medical record (EMR) as well as the Room Assessment (RA) survey, a validated instrument used in prior clinical studies to assess inpatient settings.17 The RA survey uses the Semantic Differential scale, a rating scale designed to measure emotional associations by using paired attributes.18 In our scale, we listed 17 paired, polar-opposite attributes, with one descriptor reflecting a more positive impression than the other. Anxiety, emotional state, and control and/or influence were assessed at baseline and prior to discharge; emotional state was also assessed every 1 to 2 days; and perceptions of the room and overall patient experience were measured once, prior to discharge, using the RA survey.

Data Analysis

A sample of 180 participants was chosen, with a 2:1 ratio of art group to no-art control group, to provide at least 80% power to detect a difference in anxiety score of 4 units for the comparisons of interest among the groups. The calculations assumed a 2-sided test with α = 0.05.
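
A minimal sketch of the stated power calculation using statsmodels follows; the standard deviation of 8 points is an assumption (the value used in the original calculation is not reported), so the resulting sample sizes are purely illustrative.

```python
from statsmodels.stats.power import TTestIndPower

assumed_sd = 8.0                      # assumption: SD of the anxiety score (not reported)
effect_size = 4.0 / assumed_sd        # 4-unit difference expressed as Cohen's d

analysis = TTestIndPower()
n_control = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,                       # 2-sided test
    power=0.80,
    ratio=2.0,                        # art : no-art allocation of 2:1
    alternative="two-sided",
)
print(f"Approximate n: {n_control:.0f} controls, {2 * n_control:.0f} in the art arms")
```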

 

 

Comparisons were made between (1) those with paintings versus those without and (2) those with a choice of paintings versus those with no choice. For the primary psychological outcome, the average anxiety score at discharge was compared between groups of interest by using analysis of covariance, with adjustment for baseline score. Items measuring mood, depression, control, and influence that were collected more frequently were compared between groups by using repeated measures analysis of covariance, with adjustment for corresponding score at baseline. For clinical outcomes, median LOS was compared between groups by using the Wilcoxon rank sum test due to the skewed distribution of data, and QOL and pain were compared between groups by using repeated measures analysis of covariance. The model for patient-reported pain included covariates for pain medication received and level of pain tolerance. Outcomes measuring perceptions of hospital environment were collected at a single time point and compared between groups by using the 2-sample t-test. Results were reported in terms of means and 95% confidence intervals or medians and quartiles. Significance was defined by P < .05. All facets of this study were approved by the Pennsylvania State University College of Medicine Institutional Review Board.
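
The sketch below illustrates the two core comparisons described above: an analysis of covariance for discharge anxiety adjusted for the baseline score, and a Wilcoxon rank-sum (Mann-Whitney U) test for length of stay. The data frame and variable names are hypothetical stand-ins, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import mannwhitneyu

# Hypothetical toy data: one row per patient (illustrative values only)
df = pd.DataFrame({
    "group":             ["art", "art", "art", "no_art", "no_art", "no_art"],
    "anxiety_baseline":  [42, 38, 51, 45, 40, 47],
    "anxiety_discharge": [39, 36, 48, 44, 42, 46],
    "los_days":          [6, 4, 12, 9, 5, 20],
})

# ANCOVA: discharge anxiety by group, adjusted for baseline anxiety
ancova = smf.ols("anxiety_discharge ~ C(group) + anxiety_baseline", data=df).fit()
print(ancova.params)

# Wilcoxon rank-sum test for the skewed length-of-stay distribution
art_los = df.loc[df["group"] == "art", "los_days"]
control_los = df.loc[df["group"] == "no_art", "los_days"]
stat, p_value = mannwhitneyu(art_los, control_los, alternative="two-sided")
print(f"LOS comparison: U = {stat}, p = {p_value:.3f}")
```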

RESULTS

We approached 518 patients to participate in the study, and 203 elected to enroll. Seventeen patients withdrew from the study because they had been discharged from the hospital or were unable to continue. Of the 186 participants who completed the study, 74 chose the painting displayed in their rooms, 69 had paintings randomly selected for them, and 43 had no paintings in their rooms, only white boards in their line of vision. The average age of participants in the trial was 56 years, 49% were male, and 89% were Caucasian. There were no significant differences between participants and decliners in terms of race (P = .13) and mean age (P = .08). However, they did differ by gender, with 49% of participants being male compared with 68% of decliners (P < .001). There were no significant differences among the 3 study groups with respect to these demographic characteristics. No harms were observed for any patients; however, several patients in the group whose artwork was randomly selected expressed distaste for the image and/or color scheme of their painting.

Psychological Outcomes: Anxiety (STAI), Mood and Depression (ETI), and Sense of Control and/or Influence (Self-Designed Instrument)

There were no differences in anxiety for the primary comparison of artwork versus no artwork or the secondary comparison of choice versus no choice. Likewise, there were no differences in mood, depression, or sense of control and/or influence across the 3 groups.

Clinical Outcomes: Self-Reported Pain, LOS, and QOL (from EMR)

There were no differences in self-reported pain, LOS, or QOL across the 3 groups. With regard to LOS, the median (quartile 1 [Q1], quartile 3 [Q3]) stay was 6 days for the choice group (4.0, 12.0), 6 days for the no-choice group (5.0, 9.5), and 9.5 days for the group with no artwork (5.0, 20.0; see supplementary Table).

Perceptions of Hospital Environment (RA Survey)

As shown in Figure 2, participants who had art in their rooms generally had more positive impressions of the hospital environment than those who did not. For 6 of the 17 paired attributes, participants with artwork were significantly more likely to choose the positive attribute—specifically, such patients indicated their rooms were more interesting, colorful, pleasant, attractive, ornate, and tasteful. With regard to the other attributes, though not reaching levels of significance, the overall pattern clearly reflected a more positive impression of rooms with art than without it.

DISCUSSION

While having paintings in cancer inpatient rooms did not affect the psychological or clinical outcomes we assessed, patients who had paintings in their rooms had more positive impressions of the hospital environment. Given that healthcare administrators are under strong pressure to control costs while increasing care quality and patient satisfaction to maximize reimbursement, integrating local artwork into inpatient rooms may represent a simple and relatively inexpensive way (approximately $90 per room) to humanize clinical environments, systematically improve perceptions of the institution, and perhaps contribute to increased patient satisfaction scores. While more work must be done to establish a positive link between access to artwork and improved standardized patient satisfaction outcomes, our results suggest that there may be benefit in giving patients an opportunity to engage with artwork as a therapeutic resource during the physical, emotional, and spiritual challenges that arise during inpatient treatment.

These findings also have implications for inpatients with illnesses other than cancer. Though we did not explicitly study noncancer patients, we know that nearly 40 million Americans are admitted annually to institutional care (ie, acute hospitalizations, rehabilitation hospitals, and skilled nursing facilities) and often find themselves in environments that can be stark and medicalized. We would anticipate that providing art in these patients’ rooms would likewise improve perceptions of the institutions where they receive their inpatient medical care.

This study had several limitations that could affect the generalizability of our findings. First, it was difficult to enroll patients: more than 50% of persons approached declined to participate. Second, nonparticipants were more likely to be male, which may have biased the sample. Third, we have incomplete data for some patients who were unavailable or changed rooms during the study. Fourth, while each patient room had standardized features (eg, windows, televisions), there were logistical challenges in placing paintings in the exact same location (ie, in the patient’s direct line of vision) in every hospital room because rooms varied in shape, size, and idiosyncratic décor, so we were not able to fully control for all room décor features. Fifth, the study was conducted at a single site and only among patients with cancer; other populations could respond very differently. It is also possible that other confounding factors (such as prior hospital experience, patient predilection for artwork, and usage of digital devices during hospitalization) could have affected outcomes, but these were not measured in this study.

In conclusion, as patient satisfaction continues to influence hospital reimbursement, identifying novel and effective approaches to improving patient perceptions can play a meaningful role in patient care. Future research should focus on different inpatient populations and venues; new strategies to effectively evaluate relevant clinical outcomes; comparisons with other nonpharmacological, arts-based interventions in inpatient settings (eg, music, creation of artwork, etc.); and assessment of aggregate scores on standardized patient satisfaction instruments (eg, Press Ganey, Hospital Consumer Assessment of Healthcare Providers and Systems). There may also be an additive benefit in providing “coaching” to healthcare providers on how to engage with patients regarding the artwork they have chosen. Such approaches might also examine the value of giving patients control over multiple opportunities to influence the aesthetics in their room versus a single opportunity during the course of their stay.

 

 

Acknowledgments

The authors would like to acknowledge the contributions of Lorna Davis, Lori Snyder, and Renee Stewart to this work.

Disclosure 

This work was supported by funding from the National Endowment for the Arts (grant 14-3800-7008). ClinicalTrials.gov Identifier for Penn State Milton S. Hershey Medical Center Protocol Record STUDY00000378: NCT02357160. The authors report no conflicts of interest.

References

1. Mehta SJ. Patient Satisfaction Reporting and Its Implications for Patient Care. AMA J Ethics. 2015;17(7):616-621.
2. Hathorn KN. A Guide to Evidence-based Art. The Center for Health and Design; 2008. https://www.healthdesign.org/sites/default/files/Hathorn_Nanda_Mar08.pdf. Accessed November 5, 2017.
3. Sonke J, Rollins J, Brandman R, Graham-Pole J. The state of the arts in healthcare in the United States. Arts & Health. 2009;1(2):107-135.
4. Ulrich RS, Zimring C, Zhu X, et al. A Review of the Research Literature on Evidence-Based Healthcare Design. HERD. 2008;1(3):61-125.
5. Beukeboom CJ, Langeveld D, Tanja-Dijkstra K. Stress-reducing effects of real and artificial nature in a hospital waiting room. J Altern Complement Med. 2012;18(4):329-333.
6. Miller AC, Hickman LC, Lemaster GK. A Distraction Technique for Control of Burn Pain. J Burn Care Rehabil. 1992;13(5):576-580.
7. Diette GB, Lechtzin N, Haponik E, Devrotes A, Rubin HR. Distraction Therapy with Nature Sights and Sounds Reduces Pain During Flexible Bronchoscopy: A Complementary Approach to Routine Analgesic. Chest. 2003;123(3):941-948.
8. Tse MM, Ng JK, Chung JW, Wong TK. The effect of visual stimuli on pain threshold and tolerance. J Clin Nurs. 2002;11(4):462-469.
9. Vincent E, Battisto D, Grimes L. The Effects of Nature Images on Pain in a Simulated Hospital Patient Room. HERD. 2010;3(3):56-69.
10. Staricoff RL. Arts in health: the value of evaluation. J R Soc Promot Health. 2006;126(3):116-120.
11. Karnik M, Printz B, Finkel J. A Hospital’s Contemporary Art Collection: Effects on Patient Mood, Stress, Comfort, and Expectations. HERD. 2014;7(3):60-77.
12. Suter E, Baylin D. Choosing art as a complement to healing. Appl Nurs Res. 2007;20(1):32-38.
13. Harris PB, McBride G, Ross C, Curtis L. A Place to Heal: Environmental Sources of Satisfaction among Hospital Patients. J Appl Soc Psychol. 2002;32(6):1276-1299.
14. Moss H, Donnellan C, O’Neill D. A review of qualitative methodologies used to explore patient perceptions of arts and healthcare. Med Humanit. 2012;38(2):106-109.
15. Corsini RJ, Ozaki BD. State-Trait Anxiety Inventory. In: Encyclopedia of Psychology. Vol 1. New York: Wiley; 1994.
16. Beck KR, Tan SM, Lum SS, Lim LE, Krishna LK. Validation of the emotion thermometers and hospital anxiety and depression scales in Singapore: Screening cancer patients for distress, anxiety and depression. Asia Pac J Clin Oncol. 2016;12(2):e241-e249.
17. Lohr VI, Pearson-Mims CH. Physical discomfort may be reduced in the presence of interior plants. HortTechnology. 2000;10(1):53-58.
18. Semantic Differential. http://psc.dss.ucdavis.edu/sommerb/sommerdemo/scaling/semdiff.htm. Accessed November 5, 2017.


The Use of Individual Provider Performance Reports by US Hospitals

Article Type
Changed
Sun, 08/19/2018 - 21:32

Reimbursement for hospitals and physicians is increasingly tied to performance.1 Bundled payments, for example, require hospitals to share risk for patient outcomes. Medicare bundled payments are becoming mandatory for some surgical and medical conditions, including joint replacement, acute myocardial infarction, and coronary artery bypass graft surgery.2 Value-based payment is anticipated to become the norm as Medicare and private payers strive to control costs and improve outcomes. Although value-based reimbursement for hospitals targets hospital-level costs and outcomes, we know that variations at the level of individual providers explain a considerable proportion of variation in utilization and outcomes.3 However, physicians often lack awareness of their own practice patterns and relative costs, and successful participation in new payment models may require an investment by hospitals in the infrastructure needed to measure and provide feedback on performance to individual providers to affect their behavior.4,5

Electronic health record (EHR)-based reports or “dashboards” have been proposed as one potential tool to provide individualized feedback on provider performance.6 Individual provider performance profiles (IPPs) offer the potential to provide peer comparisons that may adjust individual behavior by correcting misperceptions about norms.7 Behavioral economic theory suggests that individual performance data, if combined with information on peer behavior and normative goals, may be effective in changing behavior.8 Several studies have reported the effects of specific efforts to use IPPs, showing that such reports can improve care in certain clinical areas. For example, individual provider dashboards have been associated with better outcomes for hospitalized patients, such as increased compliance with recommendations for prophylaxis of venous thromboembolism, although evidence in other areas of practice is mixed.9,10 A randomized controlled trial of peer comparison feedback reduced inappropriate antibiotic prescribing for upper respiratory infections by 11% among internists.11

Despite the promise of individualized feedback to optimize behavior, however, little has been reported on trends in the use of IPPs on a population level. It is unknown whether their use is common or rare, or what hospital characteristics are associated with adoption. Such information would help guide future efforts to promote IPP use and understand its effect on practice. We used data from a nationally representative survey of US hospitals to examine the use of individual provider-level performance profiles.

METHODS

We used data from the American Hospital Association (AHA) Annual Survey Information Technology (IT) Supplement, which asked respondents to indicate whether they have used electronic clinical data from the EHR or other electronic system in their hospital to create IPPs. The AHA survey is sent annually to all US operating hospitals. Survey results are supplemented by data from the AHA registration database, US Census Bureau, hospital accrediting bodies, and other organizations. The AHA IT supplement is also sent yearly to each hospital’s chief executive officer, who assigns it to the most knowledgeable person in the institution to complete.12

We linked data on IPP use to AHA Annual Survey responses on hospital characteristics for all general adult and pediatric hospitals. Multivariable logistic regression was used to model the odds of individual provider performance profile use as a function of hospital characteristics, including ownership (nonprofit, for profit, or government), geographic region, teaching versus nonteaching status, rural versus urban location, size, expenditures per bed, proportion of patient days covered by Medicaid, and risk-sharing models of reimbursement (participation in a health maintenance organization or bundled payments program). Variables were chosen a priori to account for important characteristics of US hospitals (eg, size, teaching status, and geographic location). These were combined with variables representing risk-sharing arrangements based on the hypothesis that hospitals whose payments are at greater risk would be more likely to invest in tracking provider performance. We eliminated any variable with an item nonresponse rate greater than 15%, which resulted in elimination of 2 variables representing hospital revenue from capitated payments and any risk-sharing arrangement, respectively. All other variables had item nonresponse rates of 0%, except for 4.7% item nonresponse for the bundled payments variable.
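
The modeling step described above can be sketched briefly. The analyses in this study were run in Stata, so the snippet below is only a hypothetical illustration in Python: the file name and variable names (eg, ipp_use, hmo_participation, bundled_payments) are assumptions rather than actual AHA field names, and the code is not the authors' analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-level extract: one row per responding hospital.
df = pd.read_csv("aha_it_supplement_2015.csv")  # illustrative file name

# Drop candidate variables whose item nonresponse exceeds 15%, as described above.
nonresponse = df.isna().mean()
df = df.drop(columns=nonresponse[nonresponse > 0.15].index)

# Multivariable logistic regression: odds of IPP use as a function of ownership,
# region, teaching status, rurality, size, expenditures per bed, Medicaid share,
# and risk-sharing arrangements (HMO participation, bundled payments).
model = smf.logit(
    "ipp_use ~ C(ownership) + C(region) + teaching + rural + C(size_category)"
    " + C(expenditure_quartile) + medicaid_share + hmo_participation"
    " + bundled_payments",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, as reported in the Results.
print(pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1))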

We also measured the trend in individual provider performance report use between 2013 and 2015 by estimating a linear probability model of IPP use as a function of survey year. A P value less than .05 was considered statistically significant.
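
The trend estimate can be illustrated in the same hypothetical fashion. In a linear probability model, the coefficient on survey year is the average annual change in the probability of IPP use; robust standard errors are a common choice because the outcome is binary. The file and variable names below are again assumptions, not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled 2013-2015 data: one row per hospital-year with a binary
# ipp_use indicator and a numeric year variable.
panel = pd.read_csv("aha_it_supplement_2013_2015.csv")  # illustrative file name

# Linear probability model of IPP use on year, with heteroskedasticity-robust
# (HC1) standard errors.
trend = smf.ols("ipp_use ~ year", data=panel).fit(cov_type="HC1")
print(trend.params["year"], trend.pvalues["year"])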

Because past work has demonstrated nonresponse bias in the AHA Survey and IT Supplement, we performed additional analyses using nonresponse weights based on hospital characteristics. Weighting methodology was based on prior work with the AHA and AHA IT surveys.13,14 Weighting exploits the fact that a number of hospital characteristics are derived from sources outside the survey and thus are available for both respondents and nonrespondents. We created nonresponse weights based on a logistic regression model of survey response as a function of hospital characteristics (ownership, size, teaching status, system membership, critical access hospital status, and geographic region). Our findings were similar for weighted and nonweighted models, and nonweighted estimates are presented throughout.
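
A minimal sketch of the nonresponse-weighting step, under the same assumptions (hypothetical file and variable names, not the authors' code):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical frame of all surveyed hospitals (respondents and nonrespondents),
# with characteristics drawn from sources outside the survey.
frame = pd.read_csv("aha_hospital_frame.csv")  # illustrative file name

# Model the probability of survey response as a function of hospital
# characteristics, then weight respondents by the inverse of that probability.
resp_model = smf.logit(
    "responded ~ C(ownership) + C(size_category) + teaching + system_member"
    " + critical_access + C(region)",
    data=frame,
).fit()
frame["p_response"] = resp_model.predict(frame)
respondents = frame.loc[frame["responded"] == 1].copy()
respondents["nonresponse_weight"] = 1.0 / respondents["p_response"]

The resulting weights could then be supplied to a weighted version of the main model (for example, statsmodels' GLM accepts a freq_weights argument); as noted above, the authors found that weighted and unweighted estimates were similar.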

The University of Pennsylvania Institutional Review Board exempted this study from review. Analyses were performed using Stata statistical software, version 14.0 (StataCorp, College Station, TX).

 

 

RESULTS

In 2015, 2334 general hospitals completed all questions of interest in both surveys. Among respondents, 65.8% used individual provider performance reports. Individual provider performance use increased by 3.3% each year from 2013 to 2015 (P = .006; Figure).

The table shows the association between hospital characteristics and the odds of individual provider performance report use. Report use was associated with nonprofit status (odds ratio [OR], 2.77; 95% confidence interval [CI], 1.94-3.95; P < .01) compared to for-profit, large hospital size (OR, 2.37; 95% CI, 1.56-3.60; P < .01) compared to small size, highest (OR, 2.09; 95% CI, 1.55-2.82; P < .01) and second highest (OR, 1.43; 95% CI, 1.08-1.89; P = .01) quartiles of bed-adjusted expenditures compared to the bottom quartile, and West geographic region compared to Northeast (OR, 2.07; 95% CI, 1.45-2.95; P < .01). Individual provider performance use was also independently associated with participation in a health maintenance organization (OR, 1.50; 95% CI, 1.17-1.90; P < .01) or bundled payment program (OR, 1.61; 95% CI, 1.18-2.19; P < .01), controlling for other covariates. Adjustment for nonresponse bias did not change any coefficients by more than 10% (supplementary Table).

DISCUSSION

We found that a large and increasing proportion of US hospitals reported using electronic data to measure individual provider performance. Hospitals that reported IPP use tended to be larger and have higher expenditures than hospitals that did not use IPPs. Adjusting for other hospital characteristics, participation in a bundled payment program was associated with greater odds of using IPPs. To our knowledge, our study is the first population-level analysis of IPP use by US hospitals.

The Medicare Access and Children's Health Insurance Program Reauthorization Act is accelerating the shift from quantity-based toward value-based reimbursement. The proficient adoption of IT by healthcare providers has been cited as an important factor in adapting to new payment models.15 Physicians, and in particular hospitalists, who practice in an inpatient environment, may not directly receive the financial incentives intended to shift performance toward value-based reimbursement. They may also have difficulty assessing their performance relative to peers and over time. Individualized EHR-based provider-level performance reports offer 1 option for hospitals to measure performance and provide comparative feedback at the individual physician level. Our findings show that, in fact, a majority of US hospitals have made investments in the infrastructure necessary to create such profiles.

Nevertheless, a third of the hospitals surveyed had not adopted individualized provider performance profiles. If meeting efficiency and outcomes goals for value-based payments necessitates changes to individual provider behavior, those hospitals may be less well positioned to benefit from value-based payment models that reward hospitals for efficiency and outcomes. Furthermore, while we observed widespread adoption of individual performance profiles, it is unclear whether these profiles were used to provide feedback to providers and, if so, how that feedback was delivered, which may influence its effect on behavior. Behavioral economics theory suggests, for example, that publicly reporting performance compared to peers provides stronger incentives for behavior change than “blinded” personalized reports.16

Our study has important limitations. We cannot exclude the possibility that unmeasured variables help explain IPP adoption; such omitted variables may confound the associations between hospital characteristics and IPP adoption observed in this study. We were also unable to establish causality between bundled payments and IPP use. For instance, hospitals may invest in the IT infrastructure that enables IPP adoption in anticipation of bundled payment reforms; alternatively, the availability of IPPs may have led hospitals to enter bundled payment reimbursement arrangements. In addition, we are unable to describe how IPP use affects physician practice or healthcare delivery more broadly. Finally, we are unable to account for other sources of performance data; for example, some physicians may receive data from their physician practice groups.

Our study suggests several avenues for future research. First, more work is needed to understand why certain types of hospitals are more likely to use IPPs. Our findings indicate that IPP use may be partly a function of hospital size and resources. However, other factors not measured here, such as institutional culture, may play an important role as well. Institutions with a focus on informatics and strong IT leadership may be more likely to use their EHR to monitor performance. Second, further research should explore in greater depth how profiles are used. Future research should evaluate, for example, how hospitals are using behavioral economic principles, such as peer comparison, to motivate behavior change, and whether such techniques have successfully influenced practice and patient outcomes. Ultimately, multicenter, randomized evaluations of IPP use may be necessary to understand their risks and evaluate their effect on patient outcomes. This work is necessary to inform policy and practice as hospitals transition from fee-for-service to value-based reimbursement.

In sum, we observed increasing adoption of individualized electronic provider performance profiles by US hospitals from 2013 to 2015. Hospitals that did not use IPPs were more likely to be small and for profit and were less likely to participate in bundled payment programs. Those hospitals may be less well positioned to track provider performance and implement incentives for the provider behavior changes needed to meet targets for value-based reimbursement.

 

 

Disclosure

Dr. Rolnick is a consultant to Tuple Health, Inc. and was a part-time employee of Acumen, LLC outside the submitted work. Dr. Ryskina has nothing to disclose.

References

1. Hussey PS, Liu JL, White C. The Medicare Access And CHIP Reauthorization Act: effects On Medicare payment policy and spending. Health Aff. 2017;36(4):697-705. PubMed
2. Navathe AS, Song Z, Emanuel EJ. The next generation of episode-based payments. JAMA. 2017;317(23):2371-2372. PubMed
3. Tsugawa Y, Jha AK, Newhouse JP, Zaslavsky AM, Jena AB. Variation in physician spending and association with patient outcomes. JAMA Intern Med. 2017;177(5):675-682. PubMed
4. Saint S, Wiese J, Amory JK, et al. Are physicians aware of which of their patients have indwelling urinary catheters? Am J Med. 2000;109(6):476-480. PubMed
5. Saturno PJ, Palmer RH, Gascón JJ. Physician attitudes, self-estimated performance and actual compliance with locally peer-defined quality evaluation criteria. Int J Qual Health Care J Int Soc Qual Health Care. 1999;11(6):487-496. PubMed
6. Mehrotra A, Sorbero MES, Damberg CL. Using the lessons of behavioral economics to design more effective pay-for-performance programs. Am J Manag Care. 2010;16(7):497-503. PubMed
7. Emanuel EJ, Ubel PA, Kessler JB, et al. Using behavioral economics to design physician Incentives that deliver high-value care. Ann Intern Med. 2016;164(2):114-119. PubMed
8. Liao JM, Fleisher LA, Navathe AS. Increasing the value of social comparisons of physician performance using norms. JAMA. 2016;316(11):1151-1152. PubMed
9. Michtalik HJ, Carolan HT, Haut ER, et al. Use of provider-level dashboards and pay-for-performance in venous thromboembolism prophylaxis. J Hosp Med. 2015;10(3):172-178. PubMed
10. Kurtzman G, Dine J, Epstein A, et al. Internal medicine resident engagement with a laboratory utilization dashboard: mixed methods study. J Hosp Med. 12(9):743-746. PubMed
11. Linder JA, Schnipper JL, Tsurikova R, et al. Electronic health record feedback to improve antibiotic prescribing for acute respiratory infections. Am J Manag Care. 2010;16 (12 Suppl HIT):e311-e319. PubMed
12. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638. PubMed
13. Walker DM, Mora AM, Scheck McAlearney A. Accountable care organization hospitals differ in health IT capabilities. Am J Manag Care. 2016;22(12):802-807. PubMed
14. Adler-Milstein J, DesRoches CM, Furukawa MF, et al. More than half of US hospitals have at least a basic EHR, but stage 2 criteria remain challenging for most. Health Aff. 2014;33(9):1664-1671. PubMed
15. Porter ME. A strategy for health care reform—toward a value-based system. N Engl J Med. 2009;361(2):109-112. PubMed
16. Navathe AS, Emanuel EJ. Physician peer comparisons as a nonfinancial strategy to improve the value of care. JAMA. 2016;316(17):1759-1760. PubMed

Journal of Hospital Medicine 13(8):562-565. Published online first February 7, 2018

© 2018 Society of Hospital Medicine

Correspondence Location
Joshua A. Rolnick, MD, JD, University of Pennsylvania, National Clinician Scholars Program, Blockley Hall, 13th Floor, 423 Guardian Drive, Philadelphia, PA 19104-6021; Telephone: 617-538-5191; Fax: 610-642-4380; E-mail: rolnick@pennmedicine.upenn.edu

Collaborations with Pediatric Hospitalists: National Surveys of Pediatric Surgeons and Orthopedic Surgeons

Article Type
Changed
Fri, 10/04/2019 - 16:31

Pediatric expertise is critical in caring for children during the perioperative and postoperative periods.1,2 Some postoperative care models involve pediatric hospitalists (PH) as collaborators for global care (comanagement),3 as consultants for specific issues, or not at all.

Single-site studies in specific pediatric surgical populations4-7 and medically fragile adults8 suggest that hospitalist-surgeon collaboration improves outcomes for patients and systems. However, including PH in the care of surgical patients may also disrupt systems. No studies have broadly examined the clinical relationships between surgeons and PH.

The aims of this cross-sectional survey of US pediatric surgeons (PS) and pediatric orthopedic surgeons (OS) were to understand (1) the prevalence and characteristics of surgical care models in pediatrics, specifically those involving PH, and (2) surgeons’ perceptions of PH in caring for surgical patients.

METHODS

The target US surgeon population was the estimated 850 active PS and at least 600 pediatric OS.9 Most US PS (n = 606) are affiliated with the American Academy of Pediatrics (AAP) Section on Surgery (SoSu), representing at least 200 programs. Nearly all pediatric OS belong to the Pediatric Orthopedic Society of North America (POSNA) (n = 706), representing 340 programs; a subset (n = 130) also belong to the AAP SoSu.

Survey Development and Distribution

Survey questions were developed to elicit surgeons’ descriptions of their program structure and their perceptions of PH involvement. For programs with PH involvement, program variables included primary assignment of clinical responsibilities by service line (surgery, hospitalist, shared) and use of a written service agreement, which defines each service’s roles and responsibilities.

The web-based survey, created by using Survey Monkey (San Mateo, CA), was pilot tested for usability and clarity among 8 surgeons and 1 PH. The survey had logic points around involvement of hospitalists and multiple hospital affiliations (supplemental Appendix A). The survey request with a web-based link was e-mailed 3 times to surgical and orthopedic distribution outlets, endorsed by organizational leadership. Respondents’ hospital ZIP codes were used as a proxy for program. If there was more than 1 complete survey response per ZIP code, 1 response with complete data was randomly selected to ensure a unique entry per program.
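
As an illustration of the deduplication step, the following sketch selects one complete response per hospital ZIP code at random; the file and column names (eg, hospital_zip, complete) are hypothetical and do not reflect the actual survey export.

import pandas as pd

# Hypothetical respondent-level survey export; hospital ZIP code serves as a
# proxy for program, as described above.
responses = pd.read_csv("surgeon_survey_export.csv")  # illustrative file name

# Keep complete responses, then randomly select one response per hospital ZIP
# code so that each program contributes a single entry.
complete = responses.loc[responses["complete"] == 1]
one_per_program = (
    complete.sample(frac=1, random_state=0)          # shuffle rows
            .drop_duplicates(subset="hospital_zip")  # keep the first per ZIP
)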

Classification of Care Models

Each surgical program was classified into 1 of the following 3 categories based on reported care of primary surgical patients: (1) comanagement, described as PH writing orders and/or functioning as the primary service; (2) consultation, described as PH providing clinical recommendations only; and (3) no PH involvement, described as “rarely” or “never” involving PH.
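
The classification logic can be written out as a short sketch; the argument names are illustrative flags derived from survey items, not the actual questionnaire variables.

# Assign one of the three care-model categories described above.
def classify_care_model(ph_writes_orders: bool,
                        ph_primary_service: bool,
                        ph_gives_recommendations: bool,
                        ph_involvement: str) -> str:
    if ph_involvement in ("rarely", "never"):
        return "no PH involvement"
    if ph_writes_orders or ph_primary_service:
        return "comanagement"
    if ph_gives_recommendations:
        return "consultation"
    return "no PH involvement"

print(classify_care_model(False, False, True, "often"))  # consultation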

Clinical Responsibility Score

To estimate the degree of hospitalist involvement, we devised and calculated a composite score of service responsibilities for each program. The score covered the following 7 clinical domains: management of fluids or nutrition, pain, comorbidities, antibiotics, medication dosing, wound care, and discharge planning. Each domain was scored 0 for surgical team primary responsibility, 1 for shared surgical and hospitalist responsibility, and 2 for hospitalist primary responsibility, and the domain scores were summed. Composite scores could range from 0 to 14; lower scores represented a stronger tendency toward surgeon management, and higher scores represented a stronger tendency toward PH management.
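
A minimal sketch of the composite score calculation, with illustrative domain labels and response codes (the actual survey wording may differ):

# Points per domain: 0 = surgical team primary, 1 = shared, 2 = hospitalist primary.
DOMAIN_POINTS = {"surgeon": 0, "shared": 1, "hospitalist": 2}
DOMAINS = ["fluids_nutrition", "pain", "comorbidities", "antibiotics",
           "medication_dosing", "wound_care", "discharge_planning"]

def composite_responsibility_score(program: dict) -> int:
    """Sum the 7 domain scores; 0 = fully surgeon-managed, 14 = fully PH-managed."""
    return sum(DOMAIN_POINTS[program[domain]] for domain in DOMAINS)

example = {domain: "shared" for domain in DOMAINS}
print(composite_responsibility_score(example))  # 7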

Data Analysis

For data analysis, simple exploratory tests using χ2 analysis and Student t tests were performed with Stata 14.2 (StataCorp LLC, College Station, TX) to compare programs by surgical specialty and to compare individual respondents by role assignment and by perceptions of PH involvement.
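
Although the analysis was performed in Stata, the comparisons can be illustrated with a hypothetical Python sketch (file and column names are assumptions, not the study data set):

import pandas as pd
from scipy import stats

# Hypothetical program-level file with specialty ('PS' or 'OS'), the assigned
# care model, and the composite responsibility score.
programs = pd.read_csv("program_level_summary.csv")  # illustrative file name

# Chi-square test: distribution of care models by surgical specialty.
table = pd.crosstab(programs["specialty"], programs["care_model"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Student t test: composite responsibility scores in OS versus PS programs.
os_scores = programs.loc[programs["specialty"] == "OS", "composite_score"]
ps_scores = programs.loc[programs["specialty"] == "PS", "composite_score"]
t_stat, p_t = stats.ttest_ind(os_scores, ps_scores)
print(p_chi2, p_t)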

The NYU School of Medicine Institutional Review Board approved this study.

RESULTS

Respondents and Programs

Of the estimated 606 PS in the AAP SoSu, 291 (49%) US-based surgeons responded, with 251 (41%) sufficiently completed surveys (Table). The initial and completed survey response rates for pediatric OS through the POSNA listserv were 58% and 48% (340/706), respectively. These respondents represented 185 unique PS programs and 212/340 (62%) unique OS programs in the US (supplemental Appendix B).

Among the unique 185 PS programs and 212 OS programs represented, PH were often engaged in the care of primary surgical patients (Table).

 

 

Roles of PH in Collaborative Programs

Among programs that reported any hospitalist involvement (PS, n = 100; OS, n = 157), few (≤15%) involved hospitalists with all patients. Pediatric OS programs were significantly more likely than pediatric surgical programs to involve PH for healthy patients with any high-risk surgery (27% vs 9%; P = .001). Most PS (64%) and OS (83%) programs reported involving PH for all medically complex patients, regardless of surgery risk (P = .003).

In programs involving PH, few PS (11%) or OS (16%) programs reported using a written service agreement.

Care of Surgical Patients in PH-Involved Programs

Both PS and OS programs with hospitalist involvement reported that surgical teams were either primarily responsible for, or shared with the hospitalist, most aspects of patient care, including medication dosing, nutrition, and fluids (Figure). PH management of antibiotic and nonsurgical comorbidities was higher for OS programs than PS programs.

Composite clinical responsibility scores ranged from 0 to 8, with a median score of 2.3 (interquartile range [IQR] 0-3) for consultation programs and 5 (IQR 1-7) for comanagement programs. Composite scores were higher for OS (7.4; SD 3.4) versus PS (3.3; SD 3.4) programs (P < .001; 95% CI, 3.3-5.5; supplemental Appendix C).

Surgeons’ Perspectives on Hospitalist Involvement

Surgeons in programs without PH involvement viewed the overall impact of PH less positively than those in programs with PH involvement (27% vs 58%). Among all surgeons surveyed, few perceived a positive (agree/strongly agree) PH impact on pain management (<15%) or on decreasing length of stay (<15%; supplemental Appendix D).

Most surgeons (n = 355) believed that PH financial support should come from separate billing (“patient fee”) (48%) or hospital budget (36%). Only 17% endorsed PH receiving part of the surgical global fee, with no significant difference by surgical specialty or current PH involvement status.

DISCUSSION

This study is the first comprehensive assessment of surgeons’ perspectives on the involvement and effectiveness of PH in the postoperative care of children undergoing inpatient general or orthopedic surgeries. The high prevalence (>70%) of PH involvement among responding surgical programs suggests that PH comanagement of hospitalized patients merits attention from providers, systems, educators, and payors.

Collaboration and Roles are Correlated with Surgical Specialty and Setting

Forty percent of inpatient pediatric surgeries occur outside of children’s hospitals.10 We found that PH involvement was higher at smaller and general hospitals, where PH may provide pediatric expertise when pediatric resources, such as pain teams, are insufficient.7 Alternately, some quaternary centers have dedicated surgical hospitalists. The extensive involvement of PH across many clinical care domains, especially care coordination, in OS and in many PS programs (Figure) suggests that PH are well integrated into many programs and provide essential clinical care.

In many large freestanding children’s hospitals, though, surgical teams may have sufficient depth and breadth to manage most aspects of care, with the possible exception of care coordination for medically complex patients. Care coordination is a patient- and family-centered care best practice11 that encompasses integrating and aligning medical care among clinical services and is focused on shared decision making and communication. High-quality care coordination processes are of great value to patients and families, especially for medically complex children,11 and are associated with improved transitions from hospital to home.12 Well-planned transitions likely decrease postoperative readmission risk, complications, and prolonged length of stay in these special populations.13 Reimbursement models could recognize these contributions, which are needed for safe and patient-centered pediatric inpatient surgical care.

Perceptions of PH Impact

The variation in perception of PH by surgical specialty, with higher prevalence of and higher regard for PH among OS, is intriguing. This disparity may reflect the current training and clinical expectations of each surgical specialty, with a larger emphasis on medical management in general surgical curricula compared with orthopedic curricula (www.acgme.org).

While PS and OS respondents perceived that PH involvement did not influence length of stay, pain management, or resource use, single-site studies suggest otherwise.4,8,14 Objective data on the impact of PH involvement on patient and systems outcomes may help elucidate whether this is a perceived or actual lack of impact. Future metrics might include pain scores, patient-centered care measures of communication and coordination, patient complaints and/or lawsuits, resource utilization and/or cost, readmissions, and medical errors.

This study has several limitations. There is likely a (self) selection bias by surgeons with either strongly positive or negative views of PH involvement. Future studies may target a random sampling of programs rather than a cross-sectional survey of individual providers. Relatively few respondents represented community hospitals, possibly because these facilities are staffed by general OS and general surgeons10 who were not included in this sample.

 

 

CONCLUSION

Given the high prevalence of PH involvement in caring for surgical pediatric patients in varied settings, the field of pediatric hospital medicine should support increased PH training and standardized practice around perioperative management, particularly for medically complex patients with increased care coordination needs. Surgical comanagement, including interdisciplinary communication skills, deserves inclusion as a PH core competency and as an entrustable professional activity for pediatric hospital medicine and pediatric graduate medical education programs,15 especially for orthopedic surgeries.

Further research on effective, evidence-based pediatric postoperative care and collaboration models will help PH and surgeons partner most effectively and respectfully to improve care.

Acknowledgments

The authors thank the members of the AAP Section on Hospital Medicine Surgical Care Subcommittee, AAP SOHM leadership, and Ms. Alexandra Case.

Disclosure 

The authors have no conflicts of interest relevant to this manuscript to report. This study was supported in part by the Agency for Health Care Research and Quality (LM, R00HS022198).

References

1. Task Force for Children’s Surgical Care. Optimal resources for children’s surgical care in the United States. J Am Coll Surg. 2014;218(3):479-487, 487.e1-4. PubMed
2. Section on Hospital Medicine, American Academy of Pediatrics. Guiding principles for pediatric hospital medicine programs. Pediatrics. 2013;132(4):782-786. PubMed
3. Freiburg C, James T, Ashikaga T, Moalem J, Cherr G. Strategies to accommodate resident work-hour restrictions: Impact on surgical education. J Surg Educ. 2011;68(5):387-392. PubMed
4. Pressel DM, Rappaport DI, Watson N. Nurses’ assessment of pediatric physicians: Are hospitalists different? J Healthc Manag. 2008;53(1):14-24; discussion 24-25. PubMed
5. Simon TD, Eilert R, Dickinson LM, Kempe A, Benefield E, Berman S. Pediatric hospitalist comanagement of spinal fusion surgery patients. J Hosp Med. 2007;2(1):23-30. PubMed
6. Rosenberg RE, Ardalan K, Wong W, et al. Postoperative spinal fusion care in pediatric patients: Co-management decreases length of stay. Bull Hosp Jt Dis (2013). 2014;72(3):197-203. PubMed
7. Dua K, McAvoy WC, Klaus SA, Rappaport DI, Rosenberg RE, Abzug JM. Hospitalist co-management of pediatric orthopaedic surgical patients at a community hospital. Md Med. 2016;17(1):34-36. PubMed
8. Rohatgi N, Loftus P, Grujic O, Cullen M, Hopkins J, Ahuja N. Surgical comanagement by hospitalists improves patient outcomes: A propensity score analysis. Ann Surg. 2016;264(2):275-282. PubMed
9. Poley S, Ricketts T, Belsky D, Gaul K. Pediatric surgeons: Subspecialists increase faster than generalists. Bull Amer Coll Surg. 2010;95(10):36-39. PubMed
10. Somme S, Bronsert M, Morrato E, Ziegler M. Frequency and variety of inpatient pediatric surgical procedures in the United States. Pediatrics. 2013;132(6):e1466-e1472. PubMed
11. Frampton SB, Guastello S, Hoy L, Naylor M, Sheridan S, Johnston-Fleece M, eds. Harnessing Evidence and Experience to Change Culture: A Guiding Framework for Patient and Family Engaged Care. Washington, DC: National Academies of Medicine; 2017. 
12. Auger KA, Kenyon CC, Feudtner C, Davis MM. Pediatric hospital discharge interventions to reduce subsequent utilization: A systematic review. J Hosp Med. 2014;9(4):251-260. PubMed
13. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126(4):647-655. PubMed
14. Rappaport DI, Adelizzi-Delany J, Rogers KJ, et al. Outcomes and costs associated with hospitalist comanagement of medically complex children undergoing spinal fusion surgery. Hosp Pediatr. 2013;3(3):233-241. PubMed
15. Jerardi K, Meier K, Shaughnessy E. Management of postoperative pediatric patients. MedEdPORTAL. 2015;11:10241. doi:10.15766/mep_2374-8265.10241. 

Trends in Inpatient Admission Comorbidity and Electronic Health Data: Implications for Resident Workload Intensity

Article Type
Changed
Sun, 08/19/2018 - 21:33

Since the Accreditation Council for Graduate Medical Education (ACGME) imposed new duty hour regulations in 2003 and again in 2011, there have been concerns that the resulting compression of resident work may have created a negative learning environment.1-3 Residents are now expected to complete more work in less time and with less flexibility.4 In addition to time constraints, the actual work of a resident today may differ from that of a resident in the past, especially in the area of clinical documentation.5 Restricting resident work hours without examining the workload may increase work intensity and counter the potential benefits of working fewer hours.6 Measuring workload, as well as electronic health record (EHR)–related stress, may also help combat burnout in internal medicine.7 Many components influence resident workload, including patient census, patient comorbidities and acuity, EHR data and other available documentation, and ancillary tasks and procedures.7 We define resident workload intensity as the responsibilities required to provide patient care within a specified time. Objective data regarding resident workload intensity are scarce, yet such data are essential to graduate medical education reform and optimization. Patient census, ancillary responsibilities, number of procedures, and conference length and frequency are some of the variables that each residency program can adjust. As a first step toward objective measurement of resident workload intensity, we evaluated the workload components that are less easily controlled by residency programs: patient comorbidity and EHR data at the time of patient admission.

METHODS

We conducted an observational, retrospective assessment of all admissions to the Louis Stokes Cleveland VA Medical Center (LSCVAMC) internal medicine service from January 1, 2000 to December 31, 2015. The inclusion criteria were admission to non-ICU internal medicine services and an admission note written by a resident physician. Otherwise, there were no exclusions. Data were accessed using VA Informatics and Computing Infrastructure. This study was approved by the LSCVAMC institutional review board.

We evaluated multiple patient characteristics for each admission that were accessible in the EHR at the time of hospital admission, including patient comorbidities, medication count, and the numbers of notes and discharge summaries. The Charlson Comorbidity Index (CCI), Deyo version, was used to score all patients based on the EHR’s active problem list at the time of admission.8,9 The CCI is a validated score created by categorizing comorbidities using International Classification of Diseases, Ninth and Tenth Revisions.8 Higher CCI scores predict increased mortality and resource use. For each admission, we also counted the number of active medications, the number of prior discharge summaries, and the total number of notes available in the EHR at the time of patient admission. Patient admissions were grouped by calendar year, and the mean numbers of active medications, prior discharge summaries, and total available notes per patient during each year were calculated (Table). Data comparisons were made between 2003 and 2011 as well as between 2011 and 2015; median data are also provided for these years (Table). These years were chosen to correspond to the duty hour changes and to compare an EHR that was no longer brand new but still immature (2003), a mature EHR (2011), and the most recent available data (2015).
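
To make the grouping step concrete, the sketch below shows one way per-admission metrics could be scored and averaged by calendar year using a general-purpose data library. It is a minimal illustration under stated assumptions: the file name (admissions.csv), the column names, and the small subset of Charlson weights are hypothetical and are not the study’s actual pipeline, which used the full Deyo mapping and VA data sources.

```python
# Hypothetical sketch only: file name, column names, and the tiny subset of
# Charlson weights below are illustrative, not the study's actual pipeline.
import pandas as pd

# Toy subset of Deyo/Charlson comorbidity weights keyed by ICD-9 code prefix.
CHARLSON_WEIGHTS = {
    "428": 1,  # congestive heart failure
    "250": 1,  # diabetes without chronic complications
    "585": 2,  # renal disease
    "197": 6,  # metastatic solid tumor
}

def toy_cci(problem_list):
    """Sum weights for each comorbidity category present on the active problem list."""
    categories = {prefix for code in problem_list
                  for prefix in CHARLSON_WEIGHTS if code.startswith(prefix)}
    return sum(CHARLSON_WEIGHTS[prefix] for prefix in categories)

# One row per admission, with counts of what was visible in the EHR at admission.
adm = pd.read_csv("admissions.csv", parse_dates=["admit_date"])
adm["year"] = adm["admit_date"].dt.year
adm["cci"] = adm["problem_list"].str.split(";").apply(toy_cci)

# Mean of each admission-level metric by calendar year, analogous to the Table.
yearly_means = (
    adm.groupby("year")[["cci", "n_medications", "n_notes", "n_discharge_summaries"]]
       .mean()
       .round(2)
)
print(yearly_means)
```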

RESULTS

A total of 67,346 admissions were included in the analysis. All parameters increased from 2000 to 2015. Mean CCI increased from 1.60 in 2003 (95% CI, 1.54–1.65) to 3.05 in 2011 (95% CI, 2.97–3.13) and to 3.77 in 2015 (95% CI, 3.67–3.87). Mean number of comorbidities increased from 6.21 in 2003 (95% CI, 6.05–6.36) to 16.09 in 2011 (95% CI, 15.84–16.34) and to 19.89 in 2015 (95% CI, 19.57–20.21). Mean number of notes increased from 193 in 2003 (95% CI, 186–199) to 841 in 2011 (95% CI, 815–868) and to 1289 in 2015 (95% CI, 1243–1335). Mean number of medications increased from 8.37 in 2003 (95% CI, 8.15–8.59) to 16.89 in 2011 (95% CI 16.60–17.20) and decreased to 16.49 in 2015 (95% CI, 16.18–16.80). Mean number of discharge summaries available at admission increased from 2.29 in 2003 (95% CI, 2.19–2.38) to 4.42 in 2011 (95% CI, 4.27–4.58) and to 5.48 in 2015 (95% CI, 5.27–5.69).
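
The intervals reported above are standard 95% confidence intervals for a mean; a minimal sketch of how such an interval might be computed is shown below, using invented values rather than study data.

```python
# Minimal sketch of a normal-approximation 95% CI for a mean (invented sample).
import numpy as np
from scipy import stats

notes_per_admission = np.array([640, 910, 1020, 780, 870, 955])  # toy values

mean = notes_per_admission.mean()
sem = stats.sem(notes_per_admission)             # standard error of the mean
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean = {mean:.0f}, 95% CI ({ci_low:.0f}-{ci_high:.0f})")
```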


DISCUSSION

This retrospective, observational study shows that patient comorbidity and EHR data burden have increased over time, both of which impact resident workload at the time of admission. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission may be increasing over time.

Patient comorbidity has likely increased due to a combination of factors. Elective admissions have decreased, and demographics have changed consistent with an aging population. Trainee admission patterns have also changed over time, with less-acute patients often admitted to nonacademic providers. Additionally, there are more stringent requirements for inpatient admission, resulting in higher acuity and comorbidity.

As EHRs have matured and documentation requirements have expanded, the amount of electronic data per patient has grown, substantially increasing the time required to review a patient’s medical record.5,10 In our evaluation, all EHR metrics increased between 2003 and 2011. The only metric that did not increase between 2011 and 2015 was the mean number of medications. The number of notes per patient has shown a dramatic increase: even in an EHR that has reached maturity (in use for more than 10 years), the number of notes per patient still increased by more than 50% between 2011 and 2015. The VA EHR has been in use for more than 15 years, making it an ideal resource for studying data trends. Because many EHRs are in their infancy by comparison, these data may predict how other EHRs will mature. While not every note is reviewed at every admission, this illustrates how increasing data burden combined with poor usability can be time consuming and promote inefficient patient care.11 Moreover, many argue that poor EHR usability also affects cognitive workflow and clinical decision making, processes that are of utmost value to patient quality and safety as well as resident education.12

Common program requirements for internal medicine set forth by the ACGME state that residency programs should give adequate attention to scheduling, work intensity, and work compression to optimize resident well-being and prevent burnout.13 Resident workload intensity is multifaceted and encompasses many elements, including patient census and acuity, EHR data assessment, components of patient complexity such as comorbidity and psychosocial situation, and time.13 Work intensity increases as overall patient census, complexity, acuity, or data burden increases. Similarly, work intensity increases with time restrictions on patient care (in the form of duty hours). In addition, work intensity is affected by the time allotted for nonclinical responsibilities, such as morning reports and conferences, because these decrease the amount of time a resident can spend providing patient care.

Many programs have responded to the duty hour restrictions by decreasing patient caps.14 Our data suggest that decreasing patient census alone may not adequately mitigate resident workload intensity. There are other approaches to limiting the increase in workload intensity, some of which institutions may already employ. One is for programs to account for patient complexity or acuity when allocating patients to teaching teams.14 Another is to reduce the time residents spend on ancillary tasks such as obtaining outside hospital records, transporting patients, and scheduling follow-up appointments. Forgoing routine conferences such as morning report or noon conference would decrease work intensity, although obviously at the expense of resident education. Geographic rounding can encourage more efficient use of clinical time. One of the most difficult but potentially most impactful strategies would be to streamline EHRs to simplify and speed documentation, refocus regulations, and support and build systems based on the views of clinicians.15

The main limitations of this study include its retrospective design, single-center setting, and focus on internal medicine admissions to a VA hospital. Therefore, these findings may not be generalizable to other patient populations and training programs. Another potential limitation is that changes in documentation practices may have led to “upcoding” of patient comorbidity within the EHR. In addition, we examined only the data available at the time of admission; a more complete picture of true workload intensity would require understanding the day-to-day metrics of inpatient care.

CONCLUSION

Our study demonstrates that components of resident workload (patient comorbidity and EHR data burden), specifically at the time of admission, have increased over time. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission has increased. This has significant implications for graduate medical education, patient safety, and burnout. Optimizing resident workload will require innovation in workflow, informatics, and curriculum. Future studies assessing workload and intensity over the course of the entire hospitalization are needed.


Acknowledgments

The authors thank Paul E. Drawz, MD, MHS, MS (University of Minnesota) for contributions in designing and reviewing the study.

Ethical approval: The study was approved by the Institutional Review Board at the LSCVAMC. The contents do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government. This material is the result of work supported with resources and the use of facilities of the LSCVAMC.

Disclosures

The authors declare that they have no conflicts of interest to disclose.

References

1. Bolster L, Rourke L. The Effect of Restricting Residents’ Duty Hours on Patient Safety, Resident Well-Being, and Resident Education: An Updated Systematic Review. J Grad Med Educ. 2015;7(3):349-363. PubMed
2. Fletcher KE, Underwood W, Davis SQ, Mangrulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents’ lives: a systematic review. JAMA. 2005; 294(9):1088-1100. PubMed
3. Amin A, Choe J, Collichio F, et al. Resident Duty Hours: An Alliance for Academic Internal Medicine Position Paper. http://www.im.org/d/do/6967. Published February 2016. Accessed November 30, 2017.
4. Goitein L, Ludmerer KM. Resident workload-let’s treat the disease, not just the symptom. JAMA Intern Med. 2013;173(8):655-656. PubMed
5. Oxentenko AS, West CP, Popkave C, Weinberger SE, Kolars JC. Time spent on clinical documentation: a survey of internal medicine residents and program directors. Arch Intern Med. 2010;170(4):377-380. PubMed
6. Fletcher KE, Reed DA, Arora VM. Doing the dirty work: measuring and optimizing resident workload. J Gen Intern Med. 2011;26(1):8-9. PubMed
7. Linzer M, Levine R, Meltzer D, Poplau S, Warde C, West CP. 10 bold steps to prevent burnout in general internal medicine. J Gen Intern Med. 2014;29(1):18-20. PubMed
8. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. PubMed
9. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619. PubMed
10. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303. PubMed
11. Friedberg MW, Chen PG, Van Busum KR, et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Rand Health Q. 2014;3(4):1. PubMed
12. Smith SW, Koppel R. Healthcare information technology’s relativity problems: a typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. J Am Med Inform Assoc. 2014; 21(1):117-131. PubMed
13. ACGME Program Requirements for Graduate Medical Education in Internal Medicine. http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_2017-07-01.pdf. Revised July 1, 2017. Accessed July 22, 2017.
14. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327. PubMed
15. Payne TH, Corley S, Cullen TA, et al. Report of the AMIA EHR-2020 Task Force on the status and future direction of EHRs. J Am Med Inform Assoc. 2015;22(5):1102-1110. PubMed

Since the Accreditation Council for Graduate Medical Education (ACGME) posed new duty hour regulations in 2003 and again in 2011, there have been concerns that the substantial compression of resident workload may have resulted in a negative learning environment.1-3 Residents are now expected to complete more work in a reduced amount of time and with less flexibility.4 In addition to time constraints, the actual work of a resident today may differ from that of a resident in the past, especially in the area of clinical documentation.5 Restricting resident work hours without examining the workload may result in increased work intensity and counter the potential benefits of working fewer hours.6 Measuring workload, as well as electronic health record (EHR)–related stress, may also help combat burnout in internal medicine.7 There are many components that influence resident workload, including patient census, patient comorbidities and acuity,EHR data and other available documentation, and ancillary tasks and procedures.7 We define resident workload intensity as the responsibilities required to provide patient care within a specified time. There is a paucity of objective data regarding the workload intensity of residents, which are essential to graduate medical education reform and optimization. Patient census, ancillary responsibilities, number of procedures, and conference length and frequency are some of the variables that can be adjusted by each residency program. As a first step to objective measurement of resident workload intensity, we endeavored to evaluate the less easily residency program–controlled workload components of patient comorbidity and EHR data the time of patient admission.

METHODS

We conducted an observational, retrospective assessment of all admissions to the Louis Stokes Cleveland VA Medical Center (LSCVAMC) internal medicine service from January 1, 2000 to December 31, 2015. The inclusion criteria were admission to non-ICU internal medicine services and an admission note written by a resident physician. Otherwise, there were no exclusions. Data were accessed using VA Informatics and Computing Infrastructure. This study was approved by the LSCVAMC institutional review board.

We evaluated multiple patient characteristics for each admission that were accessible in the EHR at the time of hospital admission, including patient comorbidities, medication count, and the number of notes and discharge summaries. The Deyo version of the Charlson Comorbidity Index (CCI) was used to score all patients based on the EHR’s active problem list at the time of admission.8,9 The CCI is a validated score created by categorizing comorbidities using International Classification of Diseases, Ninth and Tenth Revision, codes.8 Higher CCI scores predict increased mortality and resource use. For each admission, we also counted the number of active medications, the number of prior discharge summaries, and the total number of notes available in the EHR at the time of patient admission. Patient admissions were grouped by calendar year, and the mean numbers of active medications, prior discharge summaries, and total available notes per patient during each year were calculated (Table). Data comparisons were made between 2003 and 2011 as well as between 2011 and 2015; median data are also provided for these years (Table). These years were chosen based on the timing of the duty hour changes and to compare an established but still immature EHR (2003), a mature EHR (2011), and the most recent available data (2015).
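
To make the comorbidity scoring step concrete, the sketch below illustrates how a Deyo-style CCI could be computed from an admission’s coded problem list. This is a minimal illustration rather than the study’s actual code: the category-to-weight mapping is abbreviated to a handful of example comorbidities, and the ICD-9 prefixes shown are assumptions for demonstration only, not the exact code sets used in this analysis.

# Illustrative, partial Deyo-style Charlson Comorbidity Index (CCI) scoring.
# A full implementation would cover all Deyo categories and the complete
# ICD-9/ICD-10 code sets; only a few example categories are shown here.
DEYO_CATEGORIES = {
    "myocardial_infarction": {"weight": 1, "icd9_prefixes": ("410", "412")},
    "congestive_heart_failure": {"weight": 1, "icd9_prefixes": ("428",)},
    "diabetes_with_complications": {"weight": 2, "icd9_prefixes": ("250.4", "250.5", "250.6")},
    "moderate_severe_liver_disease": {"weight": 3, "icd9_prefixes": ("456.0", "456.1", "572.2")},
    "metastatic_solid_tumor": {"weight": 6, "icd9_prefixes": ("196", "197", "198", "199.0")},
}

def deyo_cci(problem_list_codes):
    """Return a CCI score for one admission from its active problem list.

    Each category is counted at most once, regardless of how many
    qualifying codes appear in the problem list.
    """
    score = 0
    for category in DEYO_CATEGORIES.values():
        if any(code.startswith(prefix)
               for code in problem_list_codes
               for prefix in category["icd9_prefixes"]):
            score += category["weight"]
    return score

# Example: an admission with an old MI, heart failure, and complicated diabetes.
print(deyo_cci(["412", "428.0", "250.60"]))  # 1 + 1 + 2 = 4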

RESULTS

A total of 67,346 admissions were included in the analysis. All parameters increased from 2000 to 2015. Mean CCI increased from 1.60 in 2003 (95% CI, 1.54–1.65) to 3.05 in 2011 (95% CI, 2.97–3.13) and to 3.77 in 2015 (95% CI, 3.67–3.87). Mean number of comorbidities increased from 6.21 in 2003 (95% CI, 6.05–6.36) to 16.09 in 2011 (95% CI, 15.84–16.34) and to 19.89 in 2015 (95% CI, 19.57–20.21). Mean number of notes increased from 193 in 2003 (95% CI, 186–199) to 841 in 2011 (95% CI, 815–868) and to 1289 in 2015 (95% CI, 1243–1335). Mean number of medications increased from 8.37 in 2003 (95% CI, 8.15–8.59) to 16.89 in 2011 (95% CI, 16.60–17.20) and decreased to 16.49 in 2015 (95% CI, 16.18–16.80). Mean number of discharge summaries available at admission increased from 2.29 in 2003 (95% CI, 2.19–2.38) to 4.42 in 2011 (95% CI, 4.27–4.58) and to 5.48 in 2015 (95% CI, 5.27–5.69).
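
As a rough illustration of how the yearly summaries above can be produced, the sketch below groups admissions by calendar year and computes the mean with a normal-approximation 95% confidence interval for a single metric. The field names (year, cci) and the flat list-of-dictionaries layout are hypothetical placeholders, not the structure of the study dataset.

# Sketch: per-year mean and normal-approximation 95% CI for one metric.
import math
from collections import defaultdict

def yearly_mean_ci(admissions, field):
    """admissions: iterable of dicts, each with a 'year' key and the metric field."""
    by_year = defaultdict(list)
    for adm in admissions:
        by_year[adm["year"]].append(adm[field])

    summaries = {}
    for year, values in sorted(by_year.items()):
        n = len(values)
        mean = sum(values) / n
        # Sample standard deviation and standard error of the mean (requires n > 1).
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
        sem = sd / math.sqrt(n)
        summaries[year] = (mean, mean - 1.96 * sem, mean + 1.96 * sem)
    return summaries

# Toy example (not study data): three admissions in 2003 with CCI scores 2, 1, and 3.
print(yearly_mean_ci(
    [{"year": 2003, "cci": 2}, {"year": 2003, "cci": 1}, {"year": 2003, "cci": 3}],
    "cci",
))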


DISCUSSION

This retrospective, observational study shows that patient comorbidity and EHR data burden have increased over time, both of which impact resident workload at the time of admission. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission may be increasing over time.

Patient comorbidity has likely increased due to a combination of factors. Elective admissions have decreased, and demographics have changed, consistent with an aging population. Trainee admission patterns have also changed over time, with less acute patients often admitted to nonacademic providers. Additionally, there are more stringent requirements for inpatient admissions, resulting in higher acuity and comorbidity.

As EHRs have matured and documentation requirements have expanded, the amount of electronic data per patient has grown, substantially increasing the time required to review a patient’s medical record.5,10 In our evaluation, all EHR metrics increased between 2003 and 2011. The only metric that did not increase between 2011 and 2015 was the mean number of medications. The number of notes per patient has shown a dramatic increase. Even in an EHR that has reached maturity (in use for more than 10 years), the number of notes per patient still increased by more than 50% between 2011 and 2015. The VA EHR has been in use for more than 15 years, making it an ideal resource for studying data trends. Because many EHRs are in their infancy by comparison, these data may serve as a predictor of how other EHRs will mature. Although not all notes are reviewed at every admission, this illustrates how increasing data burden combined with poor usability can be time consuming and promote inefficient patient care.11 Moreover, many argue that poor EHR usability also affects cognitive workflow and clinical decision making, processes that are of utmost value to patient quality and safety as well as resident education.12

Common program requirements for internal medicine set forth by the ACGME state that residency programs should give adequate attention to scheduling, work intensity, and work compression to optimize resident well-being and prevent burnout.13 Resident workload intensity is multifaceted and encompasses many elements, including patient census and acuity, EHR data assessment, components of patient complexity such as comorbidity and psychosocial situation, and time.13 Work intensity increases with increases in overall patient census, complexity, acuity, or data burden. Similarly, work intensity increases with time restrictions on patient care (in the form of duty hours). In addition, work intensity is affected by the time allotted for nonclinical responsibilities, such as morning reports and conferences, as these decrease the amount of time a resident can spend providing patient care.

Many programs have responded to the duty hour restrictions by decreasing patient caps.14 Our data suggest that decreasing patient census alone may not adequately mitigate the workload intensity of residents. Other strategies to curb increasing workload intensity may already have been employed by some institutions. One is to take patient complexity or acuity into account when allocating patients to teaching teams.14 Another is to reduce the time residents spend on ancillary tasks such as obtaining outside hospital records, transporting patients, and scheduling follow-up appointments. Forgoing routine conferences such as morning reports or noon conferences would decrease work intensity, although obviously at the expense of resident education. Geographic rounding can encourage more efficient use of clinical time. One of the most difficult, but potentially most impactful, strategies would be to streamline EHRs: simplifying and speeding documentation, refocusing regulations, and supporting and building systems around the views of clinicians.15

The main limitations of this study include its retrospective design, single-center site, and focus on internal medicine admissions to a VA hospital. Therefore, these findings may not be generalizable to other patient populations and training programs. Another potential limitation is that changes in documentation practices may have led to “upcoding” of patient comorbidity within the EHR. In addition, this study examined only the data available at the time of admission; a more complete picture of true workload intensity would require understanding the day-to-day metrics of inpatient care.

CONCLUSION

Our study demonstrates that components of resident workload (patient comorbidity and EHR data burden), specifically at the time of admission, have increased over time. These findings, combined with the duty hour regulations, suggest that resident workload intensity at the time of admission has increased. This has significant implications for graduate medical education, patient safety, and burnout. Optimizing resident workload will require innovation in workflow, informatics, and curriculum. Future studies assessing workload and intensity over the course of the entire patient hospitalization are needed.


Acknowledgments

The authors thank Paul E. Drawz, MD, MHS, MS (University of Minnesota) for contributions in designing and reviewing the study.

Ethical approval: The study was approved by the Institutional Review Board at the LSCVAMC. The contents do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government. This material is the result of work supported with resources and the use of facilities of the LSCVAMC.

Disclosures

The authors declare that they have no conflicts of interest to disclose.

References

1. Bolster L, Rourke L. The Effect of Restricting Residents’ Duty Hours on Patient Safety, Resident Well-Being, and Resident Education: An Updated Systematic Review. J Grad Med Educ. 2015;7(3):349-363. PubMed
2. Fletcher KE, Underwood W, Davis SQ, Mangrulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents’ lives: a systematic review. JAMA. 2005;294(9):1088-1100. PubMed
3. Amin A, Choe J, Collichio F, et al. Resident Duty Hours: An Alliance for Academic Internal Medicine Position Paper. http://www.im.org/d/do/6967. Published February 2016. Accessed November 30, 2017.
4. Goitein L, Ludmerer KM. Resident workload-let’s treat the disease, not just the symptom. JAMA Intern Med. 2013;173(8):655-656. PubMed
5. Oxentenko AS, West CP, Popkave C, Weinberger SE, Kolars JC. Time spent on clinical documentation: a survey of internal medicine residents and program directors. Arch Intern Med. 2010;170(4):377-380. PubMed
6. Fletcher KE, Reed DA, Arora VM. Doing the dirty work: measuring and optimizing resident workload. J Gen Intern Med. 2011;26(1):8-9. PubMed
7. Linzer M, Levine R, Meltzer D, Poplau S, Warde C, West CP. 10 bold steps to prevent burnout in general internal medicine. J Gen Intern Med. 2014;29(1):18-20. PubMed
8. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. PubMed
9. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613-619. PubMed
10. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301-303. PubMed
11. Friedberg MW, Chen PG, Van Busum KR, et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Rand Health Q. 2014;3(4):1. PubMed
12. Smith SW, Koppel R. Healthcare information technology’s relativity problems: a typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. J Am Med Inform Assoc. 2014;21(1):117-131. PubMed
13. ACGME Program Requirements for Graduate Medical Education in Internal Medicine. http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/140_internal_medicine_2017-07-01.pdf. Revised July 1, 2017. Accessed July 22, 2017.
14. Thanarajasingam U, McDonald FS, Halvorsen AJ, et al. Service census caps and unit-based admissions: resident workload, conference attendance, duty hour compliance, and patient safety. Mayo Clin Proc. 2012;87(4):320-327. PubMed
15. Payne TH, Corley S, Cullen TA, et al. Report of the AMIA EHR-2020 Task Force on the status and future direction of EHRs. J Am Med Inform Assoc. 2015;22(5):1102-1110. PubMed


Journal of Hospital Medicine. 2018;13(8):570-572. Published online first March 26, 2018.

© 2018 Society of Hospital Medicine

Correspondence: Todd I. Smith, MD, FHM, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, 10701 East Blvd 111(W), Cleveland, OH 44106; E-mail: Todd.Smith@va.gov