Mobile Apps for Professional Dermatology Education: An Objective Review

With today’s technology, it is easier than ever to access web-based tools that enrich traditional dermatology education. The literature supports the use of these innovative platforms to enhance learning at the student and trainee levels. A controlled study of pediatric residents showed that online modules effectively supplemented clinical experience with atopic dermatitis.1 In a randomized diagnostic study of medical students, practice with an image-based web application (app) that teaches rapid recognition of melanoma proved more effective than learning a rule-based algorithm.2 Given the visual nature of dermatology, pattern recognition is an essential skill that is fostered through experience and is only made more accessible with technology.

With the added benefit of convenience and accessibility, mobile apps can supplement experiential learning. Mirroring the overall growth of mobile apps, the number of available dermatology apps has increased.3 Dermatology mobile apps serve purposes ranging from quick reference tools to comprehensive modules, journals, and question banks. At an academic hospital in Taiwan, both nondermatology and dermatology trainees’ examination performance improved after 3 weeks of using a smartphone-based wallpaper learning module displaying morphologic characteristics of fungi.4 With the expansion of virtual microscopy, mobile apps also have been created as a learning tool for dermatopathology, giving trainees the flexibility and autonomy to view slides on their own time.5 Nevertheless, the literature on dermatology mobile apps designed for the education of medical students and trainees is limited, demonstrating a need for further investigation.

Prior studies have reviewed dermatology apps for patients and practicing dermatologists.6-8 Herein, we focus on mobile apps targeting students and residents learning dermatology. General dermatology reference apps and educational aid apps have grown by 33% and 32%, respectively, from 2014 to 2017.3 As with any resource meant to educate future and current medical providers, there must be an objective review process in place to ensure accurate, unbiased, evidence-based teaching.

Well-organized, comprehensive information and a user-friendly interface are additional important factors when selecting an educational mobile app. For supplemental resources, accessibility and affordability also are priorities given the already high cost of a medical education. Overall, there is a need for a standardized method to evaluate the key factors that make an educational mobile app appropriate for this demographic. We conducted a search of mobile apps relating to dermatology education for students and residents.

Methods

We searched for publicly available mobile apps relating to dermatology education in the App Store (Apple Inc) from September to November 2019 using the search terms dermatology education, dermoscopy education, melanoma education, skin cancer education, psoriasis education, rosacea education, acne education, eczema education, dermal fillers education, and Mohs surgery education. We excluded apps that were not in English, were created for a conference, cost more than $5 to download, or did not include a specific dermatology education section. In this way, we hoped to evaluate apps that were relevant, accessible, and affordable.
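The exclusion step described above can be sketched in code. This is an illustrative reconstruction only: screening was performed by hand in the App Store, not programmatically, and the record fields and example apps below are hypothetical.

```python
# Hypothetical sketch of the study's eligibility screen (illustrative only;
# apps were screened manually, not via any App Store API).

def eligible(app):
    """Apply the study's exclusion criteria to one app record."""
    return (
        app["language"] == "English"           # exclude non-English apps
        and not app["conference_app"]          # exclude conference-specific apps
        and app["price_usd"] <= 5              # exclude apps costing more than $5
        and app["has_derm_education_section"]  # require a dermatology education section
    )

# Two made-up candidate records.
candidates = [
    {"name": "App A", "language": "English", "conference_app": False,
     "price_usd": 0.0, "has_derm_education_section": True},
    {"name": "App B", "language": "English", "conference_app": False,
     "price_usd": 9.99, "has_derm_education_section": True},
]
print([a["name"] for a in candidates if eligible(a)])  # ['App A']
```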

We modeled our study after a review of patient education apps performed by Masud et al6 and utilized their quantified grading rubric (scale of 1 to 4). We found their established criteria—educational objectives, content, accuracy, design, and conflict of interest—to be equally applicable for evaluating apps designed for professional education.6 Each app earned a minimum of 1 point and a maximum of 4 points per criterion. One point was given if the app did not fulfill the criterion, 2 points for minimally fulfilling the criterion, 3 points for mostly fulfilling the criterion, and 4 points if the criterion was completely fulfilled. Two medical students (E.H. and N.C.)—one at the preclinical stage and the other at the clinical stage of medical education—reviewed the apps using the given rubric, then discussed and resolved any discrepancies in points assigned. A dermatology resident (M.A.) independently reviewed the apps using the given rubric.



The mean of the student score and the resident score was calculated for each category. The sum of the averages for each category was considered the final score for an app, determining its overall quality. Apps with a total score of 5 to 10 were considered poor and inadequate for education. A total score of 10.5 to 15 indicated that an app was somewhat adequate (ie, useful for education in some aspects but falling short in others). Apps that were considered adequate for education, across all or most criteria, received a total score ranging from 15.5 to 20.
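The scoring procedure above (average the student and resident points per criterion, sum the averages, then band the total) can be sketched as follows. The criterion names come from the rubric; the example point values are hypothetical, not data from the study.

```python
# Sketch of the study's scoring procedure. Example point values below are
# hypothetical; only the criteria and score bands come from the article.

CRITERIA = ["educational objectives", "content", "accuracy",
            "design", "conflict of interest"]

def total_score(student_points, resident_points):
    """Average the student and resident points per criterion, then sum."""
    return sum((student_points[c] + resident_points[c]) / 2 for c in CRITERIA)

def classify(score):
    """Map a total score (possible range, 5-20) to the study's quality bands."""
    if score <= 10:          # 5 to 10: poor
        return "poor"
    elif score <= 15:        # 10.5 to 15: somewhat adequate
        return "somewhat adequate"
    else:                    # 15.5 to 20: adequate
        return "adequate"

# Hypothetical app in which the reviewers mostly agree.
student = {"educational objectives": 4, "content": 3, "accuracy": 4,
           "design": 3, "conflict of interest": 4}
resident = {"educational objectives": 3, "content": 3, "accuracy": 4,
            "design": 4, "conflict of interest": 4}

score = total_score(student, resident)
print(score, classify(score))  # 18.0 adequate
```

Because reviewer points are whole numbers, averaging two reviewers yields totals in 0.5-point steps, which is why the bands break at 10.5 and 15.5.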

Results

Our search generated 130 apps. After applying exclusion criteria, 42 apps were eligible for review. At the time of publication, 36 of these apps were still available. The possible range of scores based on the rubric was 5 to 20. The actual range of scores was 7 to 20. Of the 36 apps, 2 (5.6%) were poor, 16 (44.4%) were somewhat adequate, and 18 (50%) were adequate. Formats included primary resources, such as clinical decision support tools, journals, references, and a podcast (Table 1). Additionally, interactive learning tools included games, learning modules, and apps for self-evaluation (Table 2). Thirty apps covered general dermatology; others focused on skin cancer (n=5) and cosmetic dermatology (n=1). Regarding cost, 29 apps were free to download, whereas 7 charged a fee (mean price, $2.56).
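The percentages reported above follow directly from the counts; as a quick arithmetic check (all numbers taken from the Results):

```python
# Consistency check of the tallies reported in the Results.
total = 36
ratings = {"poor": 2, "somewhat adequate": 16, "adequate": 18}

assert sum(ratings.values()) == total
print({k: f"{v / total:.1%}" for k, v in ratings.items()})
# {'poor': '5.6%', 'somewhat adequate': '44.4%', 'adequate': '50.0%'}

# Topic and cost breakdowns also sum to 36 apps.
assert 30 + 5 + 1 == total  # general + skin cancer + cosmetic
assert 29 + 7 == total      # free + paid
```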

 

 

Comment

In addition to the convenience of having an educational tool in their white-coat pocket, learners of dermatology have been shown to benefit from supplementing their curriculum with mobile apps, which sets the stage for formal integration of mobile apps into dermatology teaching in the future.8 Prior to widespread adoption, mobile apps must be evaluated for content and utility, starting with an objective rubric.

Without official scientific standards in place, it was unsurprising that only half of the dermatology education apps in this study were classified as adequate. Among the types of apps offered—clinical decision support tools, journals, references, podcast, games, learning modules, and self-evaluation—certain categories scored higher than others. The app formats with the highest average score (16.5 out of 20) were journals and podcast.

One barrier to utilization of these apps was that a subscription to the journals and podcast was required to obtain access to all available content. Students and trainees can seek out library resources at their academic institutions to take advantage of journal subscriptions available to them at no additional cost. Dermatology residents can take advantage of their complimentary membership in the American Academy of Dermatology for a free subscription to AAD Dialogues in Dermatology (otherwise $179 annually for nonresident members and $320 annually for nonmembers).

On the other hand, learning module was the lowest-rated format (average score, 11.3 out of 20), with only Medical Student: Dermatology qualifying as adequate (total score, 16). This finding is worrisome given that students and residents might look to learning modules for quick targeted lessons on specific topics.

The lowest-scoring app, a clinical decision support tool called Naturelize, received a total score of 7. Although it listed the indications and contraindications for dermal filler types used in different locations on the face, it showed a clear conflict of interest, an oversimplified design, and little evidence-based education. These shortcomings mirror the current state of cosmetic dermatology training in residency, in which trainees report being inadequately prepared for aesthetic procedures and comparative effectiveness research is lacking.9-11

At the opposite end of the spectrum, MyDermPath+ was a reference app with a total score of 20. The app cited credible authors with a medical degree (MD) and had an easy-to-use, well-designed interface, including a reference guide, differential builder, and quiz for a range of topics within dermatology. As a free download without in-app purchases or advertisements, there was no evidence of conflict of interest. The position of a dermatopathology app as the top dermatology education mobile app might reflect an increased emphasis on dermatopathology education in residency as well as a transition to digitization of slides.5

The second-highest scoring apps (total score of 19 points) were Dermatology Database and VisualDx. Both were references covering a wide range of dermatology topics. Dermatology Database was a comprehensive search tool for diseases, drugs, procedures, and terms that was simple and entirely free to use but did not cite references. VisualDx, as its name suggests, offered quality clinical images, complete guides with references, and a unique differential builder. An annual subscription is $399.99, but the process to gain free access through a participating academic institution was simple.

Games were a unique mobile app format; however, 2 of 3 games scored in the somewhat adequate range. The game DiagnosUs, which tested users’ ability to differentiate skin cancer and psoriasis from dermatitis on clinical images, would benefit from more comprehensive content as well as professional verification of true diagnoses; these shortcomings earned the app 2 points in both the content and accuracy categories. The Unusual Suspects tested the ABCDE algorithm in a short learning module, followed by a simple game that involved identification of melanoma in a timed setting. Although the design was novel and interactive, the game was limited to the same 5 melanoma tumors overlaid on pictures of normal skin. The narrow scope earned the app 1 point for content, the repetitive gameplay limited its design score to 3 points, and the lack of real clinical images earned 2 points for educational objectives. Although game-format mobile apps can challenge the user’s knowledge with a built-in feedback or reward system, improvements should be made to ensure that apps are as educational as they are engaging.

AAD Dialogues in Dermatology was the only app in the form of a podcast and provided expert interviews along with disclosures, transcripts, commentary, and references. More than half the content in the app could not be accessed without a subscription, earning 2.5 points in the conflict of interest category. Additionally, several flaws resulted in a design score of 2.5, including inconsistent availability of transcripts, poor quality of sound on some episodes, difficulty distinguishing new episodes from those already played, and a glitch that removed the episode duration. Still, the app was a valuable and comprehensive resource, with clear objectives and cited references. With improvements in content, affordability, and user experience, apps in unique formats such as games and podcasts might appeal to kinesthetic and auditory learners.

An important factor to consider when discussing mobile apps for students and residents is cost. With rising prices of board examinations and preparation materials, supplementary study tools should not come with an exorbitant price tag. Therefore, we limited our evaluation to apps that were free or cost less than $5 to download. Even so, subscriptions and other in-app purchases were an obstacle in one-third of apps, ranging from $4.99 to unlock additional content in Rash Decisions to $69.99 to access most topics in Fitzpatrick’s Color Atlas. The highest-rated app in our study, MyDermPath+, historically cost $19.99 to download but became free with a grant from the Sulzberger Foundation.12 An initial investment to develop quality apps for the purpose of dermatology education might pay off in the end.

To evaluate the apps from the perspective of the target demographic of this study, 2 medical students—one in the preclinical stage and the other in the clinical stage of medical education—and a dermatology resident graded the apps. Certain limitations exist in this type of study, including differing learning styles, which might influence the types of apps that evaluators found most impactful to their education. Interestingly, some apps earned a higher resident score than student score. In particular, RightSite (a reference that helps with anatomically correct labeling) and Mohs Surgery Appropriate Use Criteria (a clinical decision support tool to determine whether to perform Mohs surgery) each had a 3-point discrepancy (data not shown). A resident might benefit from these practical apps in day-to-day practice, but a student would be less likely to find them useful as a learning tool.



Still, by defining adequate teaching value using specific categories of educational objectives, content, accuracy, design, and conflict of interest, we attempted to minimize the effect of personal preference on the grading process. Although we acknowledge a degree of subjectivity, we found that utilizing a previously published rubric with defined criteria was crucial in remaining unbiased.

Conclusion

Further studies should evaluate additional apps available on Apple’s iPad (tablet), as well as those on other operating systems, including Google’s Android. To ensure that mobile apps serve as adequate education tools, they should, at a minimum, be peer reviewed prior to publication or before widespread use by future and current providers. To maximize free access to highly valuable resources available in the palm of their hand, students and trainees should contact the library at their academic institution.

References
  1. Craddock MF, Blondin HM, Youssef MJ, et al. Online education improves pediatric residents' understanding of atopic dermatitis. Pediatr Dermatol. 2018;35:64-69. 
  2. Lacy FA, Coman GC, Holliday AC, et al. Assessment of smartphone application for teaching intuitive visual diagnosis of melanoma. JAMA Dermatol. 2018;154:730-731. 
  3. Flaten HK, St Claire C, Schlager E, et al. Growth of mobile applications in dermatology: 2017 update. Dermatol Online J. 2018;24:13. 
  4. Liu R-F, Wang F-Y, Yen H, et al. A new mobile learning module using smartphone wallpapers in identification of medical fungi for medical students and residents. Int J Dermatol. 2018;57:458-462.  
  5. Shahriari N, Grant-Kels J, Murphy MJ. Dermatopathology education in the era of modern technology. J Cutan Pathol. 2017;44:763-771. 
  6. Masud A, Shafi S, Rao BK. Mobile medical apps for patient education: a graded review of available dermatology apps. Cutis. 2018;101:141-144.  
  7. Mercer JM. An array of mobile apps for dermatologists. J Cutan Med Surg. 2014;18:295-297.  
  8. Tongdee E, Markowitz O. Mobile app rankings in dermatology. Cutis. 2018;102:252-256.  
  9. Kirby JS, Adgerson CN, Anderson BE. A survey of dermatology resident education in cosmetic procedures. J Am Acad Dermatol. 2013;68:e23-e28. 
  10. Waldman A, Sobanko JF, Alam M. Practice and educational gaps in cosmetic dermatologic surgery. Dermatol Clin. 2016;34:341-346.  
  11. Nielson CB, Harb JN, Motaparthi K. Education in cosmetic procedural dermatology: resident experiences and perceptions. J Clin Aesthet Dermatol. 2019;12:E70-E72.  
  12. Hanna MG, Parwani AV, Pantanowitz L, et al. Smartphone applications: a contemporary resource for dermatopathology. J Pathol Inform. 2015;6:44.
Author and Disclosure Information

From the Center for Dermatology, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey. Dr. Rao also is from the Department of Dermatology, Weill Cornell Medicine, New York, New York.

The authors report no conflict of interest.

Correspondence: Nadiya Chuchvara, BA, 1 Worlds Fair Dr, 2nd Floor, Ste 2400, Somerset, NJ 08873 (nadiyac94@gmail.com).

Issue: Cutis - 106(6), pages 321-325
Author and Disclosure Information

From the Center for Dermatology, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey. Dr. Rao also is from the Department of Dermatology, Weill Cornell Medicine, New York, New York.

The authors report no conflict of interest.

Correspondence: Nadiya Chuchvara, BA, 1 Worlds Fair Dr, 2nd Floor, Ste 2400, Somerset, NJ 08873 (nadiyac94@gmail.com).

Author and Disclosure Information

From the Center for Dermatology, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey. Dr. Rao also is from the Department of Dermatology, Weill Cornell Medicine, New York, New York.

The authors report no conflict of interest.

Correspondence: Nadiya Chuchvara, BA, 1 Worlds Fair Dr, 2nd Floor, Ste 2400, Somerset, NJ 08873 (nadiyac94@gmail.com).

Article PDF
Article PDF

With today’s technology, it is easier than ever to access web-based tools that enrich traditional dermatology education. The literature supports the use of these innovative platforms to enhance learning at the student and trainee levels. A controlled study of pediatric residents showed that online modules effectively supplemented clinical experience with atopic dermatitis.1 In a randomized diagnostic study of medical students, practice with an image-based web application (app) that teaches rapid recognition of melanoma proved more effective than learning a rule-based algorithm.2 Given the visual nature of dermatology, pattern recognition is an essential skill that is fostered through experience and is only made more accessible with technology.

With the added benefit of convenience and accessibility, mobile apps can supplement experiential learning. Mirroring the overall growth of mobile apps, the number of available dermatology apps has increased.3 Dermatology mobile apps serve purposes ranging from quick reference tools to comprehensive modules, journals, and question banks. At an academic hospital in Taiwan, both nondermatology and dermatology trainees’ examination performance improved after 3 weeks of using a smartphone-based wallpaper learning module displaying morphologic characteristics of fungi.4 With the expansion of virtual microscopy, mobile apps also have been created as a learning tool for dermatopathology, giving trainees the flexibility and autonomy to view slides on their own time.5 Nevertheless, the literature on dermatology mobile apps designed for the education of medical students and trainees is limited, demonstrating a need for further investigation.

Prior studies have reviewed dermatology apps for patients and practicing dermatologists.6-8 Herein, we focus on mobile apps targeting students and residents learning dermatology. General dermatology reference apps and educational aid apps have grown by 33% and 32%, respectively, from 2014 to 2017.3 As with any resource meant to educate future and current medical providers, there must be an objective review process in place to ensure accurate, unbiased, evidence-based teaching.

Well-organized, comprehensive information and a user-friendly interface are additional factors of importance when selecting an educational mobile app. When discussing supplemental resources, accessibility and affordability also are priorities given the high cost of a medical education at baseline. Overall, there is a need for a standardized method to evaluate the key factors of an educational mobile app that make it appropriate for this demographic. We conducted a search of mobile apps relating to dermatology education for students and residents.

Methods

We searched for publicly available mobile apps relating to dermatology education in the App Store (Apple Inc) from September to November 2019 using the search terms dermatology education, dermoscopy education, melanoma education, skin cancer education, psoriasis education, rosacea education, acne education, eczema education, dermal fillers education, and Mohs surgery education. We excluded apps that were not in English, were created for a conference, cost more than $5 to download, or did not include a specific dermatology education section. In this way, we hoped to evaluate apps that were relevant, accessible, and affordable.

We modeled our study after a review of patient education apps performed by Masud et al6 and utilized their quantified grading rubric (scale of 1 to 4). We found their established criteria—educational objectives, content, accuracy, design, and conflict of interest—to be equally applicable for evaluating apps designed for professional education.6 Each app earned a minimum of 1 point and a maximum of 4 points per criterion. One point was given if the app did not fulfill the criterion, 2 points for minimally fulfilling the criterion, 3 points for mostly fulfilling the criterion, and 4 points if the criterion was completely fulfilled. Two medical students (E.H. and N.C.)—one at the preclinical stage and the other at the clinical stage of medical education—reviewed the apps using the given rubric, then discussed and resolved any discrepancies in points assigned. A dermatology resident (M.A.) independently reviewed the apps using the given rubric.



The mean of the student score and the resident score was calculated for each category. The sum of the averages for each category was considered the final score for an app, determining its overall quality. Apps with a total score of 5 to 10 were considered poor and inadequate for education. A total score of 10.5 to 15 indicated that an app was somewhat adequate (ie, useful for education in some aspects but falling short in others). Apps that were considered adequate for education, across all or most criteria, received a total score ranging from 15.5 to 20.

Results

Our search generated 130 apps. After applying exclusion criteria, 42 apps were eligible for review. At the time of publication, 36 of these apps were still available. The possible range of scores based on the rubric was 5 to 20. The actual range of scores was 7 to 20. Of the 36 apps, 2 (5.6%) were poor, 16 (44.4%) were somewhat adequate, and 18 (50%) were adequate. Formats included primary resources, such as clinical decision support tools, journals, references, and a podcast (Table 1). Additionally, interactive learning tools included games, learning modules, and apps for self-evaluation (Table 2). Thirty apps covered general dermatology; others focused on skin cancer (n=5) and cosmetic dermatology (n=1). Regarding cost, 29 apps were free to download, whereas 7 charged a fee (mean price, $2.56).

 

 

Comment

In addition to the convenience of having an educational tool in their white-coat pocket, learners of dermatology have been shown to benefit from supplementing their curriculum with mobile apps, which sets the stage for formal integration of mobile apps into dermatology teaching in the future.8 Prior to widespread adoption, mobile apps must be evaluated for content and utility, starting with an objective rubric.

Without official scientific standards in place, it was unsurprising that only half of the dermatology education applications were classified as adequate in this study. Among the types of apps offered—clinical decision support tools, journals, references, podcast, games, learning modules, and self-evaluation—certain categories scored higher than others. App formats with the highest average score (16.5 out of 20) were journals and podcast.

One barrier to utilization of these apps was that a subscription to the journals and podcast was required to obtain access to all available content. Students and trainees can seek out library resources at their academic institutions to take advantage of journal subscriptions available to them at no additional cost. Dermatology residents can take advantage of their complimentary membership in the American Academy of Dermatology for a free subscription to AAD Dialogues in Dermatology (otherwise $179 annually for nonresident members and $320 annually for nonmembers).

On the other hand, learning module was the lowest-rated format (average score, 11.3 out of 20), with only Medical Student: Dermatology qualifying as adequate (total score, 16). This finding is worrisome given that students and residents might look to learning modules for quick targeted lessons on specific topics.

The lowest-scoring app, a clinical decision support tool called Naturelize, received a total score of 7. Although it listed the indications and contraindications for dermal filler types to be used in different locations on the face, there was a clear conflict of interest, oversimplified design, and little evidence-based education, mirroring the current state of cosmetic dermatology training in residency, in which trainees think they are inadequately prepared for aesthetic procedures and comparative effectiveness research is lacking.9-11

At the opposite end of the spectrum, MyDermPath+ was a reference app with a total score of 20. The app cited credible authors with a medical degree (MD) and had an easy-to-use, well-designed interface, including a reference guide, differential builder, and quiz for a range of topics within dermatology. As a free download without in-app purchases or advertisements, there was no evidence of conflict of interest. The position of a dermatopathology app as the top dermatology education mobile app might reflect an increased emphasis on dermatopathology education in residency as well as a transition to digitization of slides.5

The second-highest scoring apps (total score of 19 points) were Dermatology Database and VisualDx. Both were references covering a wide range of dermatology topics. Dermatology Database was a comprehensive search tool for diseases, drugs, procedures, and terms that was simple and entirely free to use but did not cite references. VisualDx, as its name suggests, offered quality clinical images, complete guides with references, and a unique differential builder. An annual subscription is $399.99, but the process to gain free access through a participating academic institution was simple.

Games were a unique mobile app format; however, 2 of 3 games scored in the somewhat adequate range. The game DiagnosUs, which tested users’ ability to differentiate skin cancer and psoriasis from dermatitis on clinical images, would benefit from more comprehensive content as well as professional verification of true diagnoses, which earned the app 2 points in both the content and accuracy categories. The Unusual Suspects tested the ABCDE algorithm in a short learning module, followed by a simple game that involved identification of melanoma in a timed setting. Although the design was novel and interactive, the game was limited to the same 5 melanoma tumors overlaid on pictures of normal skin. The narrow scope earned 1 point for content, the redundancy in the game earned 3 points for design, and the lack of real clinical images earned 2 points for educational objectives. Although game-format mobile apps have the capability to challenge the user’s knowledge with a built-in feedback or reward system, improvements should be made to ensure that apps are equally educational as they are engaging.

AAD Dialogues in Dermatology was the only app in the form of a podcast and provided expert interviews along with disclosures, transcripts, commentary, and references. More than half the content in the app could not be accessed without a subscription, earning 2.5 points in the conflict of interest category. Additionally, several flaws resulted in a design score of 2.5, including inconsistent availability of transcripts, poor quality of sound on some episodes, difficulty distinguishing new episodes from those already played, and a glitch that removed the episode duration. Still, the app was a valuable and comprehensive resource, with clear objectives and cited references. With improvements in content, affordability, and user experience, apps in unique formats such as games and podcasts might appeal to kinesthetic and auditory learners.

An important factor to consider when discussing mobile apps for students and residents is cost. With rising prices of board examinations and preparation materials, supplementary study tools should not come with an exorbitant price tag. Therefore, we limited our evaluation to apps that were free or cost less than $5 to download. Even so, subscriptions and other in-app purchases were an obstacle in one-third of apps, ranging from $4.99 to unlock additional content in Rash Decisions to $69.99 to access most topics in Fitzpatrick’s Color Atlas. The highest-rated app in our study, MyDermPath+, historically cost $19.99 to download but became free with a grant from the Sulzberger Foundation.12 An initial investment to develop quality apps for the purpose of dermatology education might pay off in the end.

To evaluate the apps from the perspective of the target demographic of this study, 2 medical students—one in the preclinical stage and the other in the clinical stage of medical education—and a dermatology resident graded the apps. Certain limitations exist in this type of study, including differing learning styles, which might influence the types of apps that evaluators found most impactful to their education. Interestingly, some apps earned a higher resident score than student score. In particular, RightSite (a reference that helps with anatomically correct labeling) and Mohs Surgery Appropriate Use Criteria (a clinical decision support tool to determine whether to perform Mohs surgery) each had a 3-point discrepancy (data not shown). A resident might benefit from these practical apps in day-to-day practice, but a student would be less likely to find them useful as a learning tool.



Still, by defining adequate teaching value using specific categories of educational objectives, content, accuracy, design, and conflict of interest, we attempted to minimize the effect of personal preference on the grading process. Although we acknowledge a degree of subjectivity, we found that utilizing a previously published rubric with defined criteria was crucial in remaining unbiased.

Conclusion

Well-organized, comprehensive content and a user-friendly interface also are important when selecting an educational mobile app. Because medical education already carries a high cost, accessibility and affordability are priorities for any supplemental resource. Overall, a standardized method is needed to evaluate the key factors that make an educational mobile app appropriate for this demographic. To that end, we conducted a search of mobile apps relating to dermatology education for students and residents.

Methods

We searched for publicly available mobile apps relating to dermatology education in the App Store (Apple Inc) from September to November 2019 using the search terms dermatology education, dermoscopy education, melanoma education, skin cancer education, psoriasis education, rosacea education, acne education, eczema education, dermal fillers education, and Mohs surgery education. We excluded apps that were not in English, were created for a conference, cost more than $5 to download, or did not include a specific dermatology education section. In this way, we hoped to evaluate apps that were relevant, accessible, and affordable.

We modeled our study after a review of patient education apps performed by Masud et al6 and utilized their quantified grading rubric (scale of 1 to 4). We found their established criteria—educational objectives, content, accuracy, design, and conflict of interest—to be equally applicable for evaluating apps designed for professional education.6 Each app earned a minimum of 1 point and a maximum of 4 points per criterion. One point was given if the app did not fulfill the criterion, 2 points for minimally fulfilling the criterion, 3 points for mostly fulfilling the criterion, and 4 points if the criterion was completely fulfilled. Two medical students (E.H. and N.C.)—one at the preclinical stage and the other at the clinical stage of medical education—reviewed the apps using the given rubric, then discussed and resolved any discrepancies in points assigned. A dermatology resident (M.A.) independently reviewed the apps using the given rubric.



The mean of the student score and the resident score was calculated for each category. The sum of the averages for each category was considered the final score for an app, determining its overall quality. Apps with a total score of 5 to 10 were considered poor and inadequate for education. A total score of 10.5 to 15 indicated that an app was somewhat adequate (ie, useful for education in some aspects but falling short in others). Apps that were considered adequate for education, across all or most criteria, received a total score ranging from 15.5 to 20.
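The rubric arithmetic described above can be sketched as follows (a minimal illustration, not the authors' actual workflow; the criterion names follow the rubric, but the sample scores are hypothetical):

```python
# The five grading criteria from Masud et al, each scored 1-4.
CRITERIA = ["educational objectives", "content", "accuracy", "design", "conflict of interest"]

def total_score(student_scores, resident_scores):
    """Average the student and resident score per criterion, then sum (range 5-20)."""
    assert set(student_scores) == set(resident_scores) == set(CRITERIA)
    return sum((student_scores[c] + resident_scores[c]) / 2 for c in CRITERIA)

def quality(total):
    """Map a total score onto the study's three quality bands."""
    if total <= 10:          # 5-10: poor
        return "poor"
    if total <= 15:          # 10.5-15: somewhat adequate
        return "somewhat adequate"
    return "adequate"        # 15.5-20: adequate

# Hypothetical example: reviewers agree on every criterion except design.
student = dict(zip(CRITERIA, [4, 3, 4, 2, 4]))
resident = dict(zip(CRITERIA, [4, 3, 4, 3, 4]))
t = total_score(student, resident)  # 17.5 -> "adequate"
```

Because each half of the pair contributes scores in whole points, totals land on half-point steps, so the band boundaries (10 vs 10.5, 15 vs 15.5) are unambiguous.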

Results

Our search generated 130 apps. After applying exclusion criteria, 42 apps were eligible for review. At the time of publication, 36 of these apps were still available. The possible range of scores based on the rubric was 5 to 20. The actual range of scores was 7 to 20. Of the 36 apps, 2 (5.6%) were poor, 16 (44.4%) were somewhat adequate, and 18 (50%) were adequate. Formats included primary resources, such as clinical decision support tools, journals, references, and a podcast (Table 1). Additionally, interactive learning tools included games, learning modules, and apps for self-evaluation (Table 2). Thirty apps covered general dermatology; others focused on skin cancer (n=5) and cosmetic dermatology (n=1). Regarding cost, 29 apps were free to download, whereas 7 charged a fee (mean price, $2.56).
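The reported proportions can be checked with quick arithmetic (counts taken from the results above):

```python
# Quality-band counts for the 36 apps still available at publication.
n = 36
counts = {"poor": 2, "somewhat adequate": 16, "adequate": 18}

# Percentages rounded to one decimal place, as reported in the text.
pcts = {k: round(100 * v / n, 1) for k, v in counts.items()}
# -> {'poor': 5.6, 'somewhat adequate': 44.4, 'adequate': 50.0}
```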


Comment

In addition to the convenience of having an educational tool in their white-coat pocket, learners of dermatology have been shown to benefit from supplementing their curriculum with mobile apps, which sets the stage for formal integration of mobile apps into dermatology teaching in the future.8 Prior to widespread adoption, mobile apps must be evaluated for content and utility, starting with an objective rubric.

Without official scientific standards in place, it was unsurprising that only half of the dermatology education apps in this study were classified as adequate. Among the formats offered—clinical decision support tools, journals, references, a podcast, games, learning modules, and self-evaluation tools—certain categories scored higher than others. The formats with the highest average score (16.5 out of 20) were journals and the podcast.

One barrier to utilization of these apps was that a subscription to the journals and podcast was required to obtain access to all available content. Students and trainees can seek out library resources at their academic institutions to take advantage of journal subscriptions available to them at no additional cost. Dermatology residents can take advantage of their complimentary membership in the American Academy of Dermatology for a free subscription to AAD Dialogues in Dermatology (otherwise $179 annually for nonresident members and $320 annually for nonmembers).

On the other hand, learning module was the lowest-rated format (average score, 11.3 out of 20), with only Medical Student: Dermatology qualifying as adequate (total score, 16). This finding is worrisome given that students and residents might look to learning modules for quick targeted lessons on specific topics.

The lowest-scoring app, a clinical decision support tool called Naturelize, received a total score of 7. Although it listed the indications and contraindications for dermal filler types to be used in different locations on the face, there was a clear conflict of interest, oversimplified design, and little evidence-based education, mirroring the current state of cosmetic dermatology training in residency, in which trainees think they are inadequately prepared for aesthetic procedures and comparative effectiveness research is lacking.9-11

At the opposite end of the spectrum, MyDermPath+ was a reference app with a total score of 20. The app cited credible authors with a medical degree (MD) and had an easy-to-use, well-designed interface, including a reference guide, differential builder, and quiz for a range of topics within dermatology. As a free download without in-app purchases or advertisements, there was no evidence of conflict of interest. The position of a dermatopathology app as the top dermatology education mobile app might reflect an increased emphasis on dermatopathology education in residency as well as a transition to digitization of slides.5

The second-highest scoring apps (total score of 19 points) were Dermatology Database and VisualDx. Both were references covering a wide range of dermatology topics. Dermatology Database was a comprehensive search tool for diseases, drugs, procedures, and terms that was simple and entirely free to use but did not cite references. VisualDx, as its name suggests, offered quality clinical images, complete guides with references, and a unique differential builder. An annual subscription is $399.99, but the process to gain free access through a participating academic institution was simple.

Games were a unique mobile app format; however, 2 of 3 games scored in the somewhat adequate range. The game DiagnosUs, which tested users’ ability to differentiate skin cancer and psoriasis from dermatitis on clinical images, would benefit from more comprehensive content as well as professional verification of true diagnoses, which earned the app 2 points in both the content and accuracy categories. The Unusual Suspects tested the ABCDE algorithm in a short learning module, followed by a simple game that involved identification of melanoma in a timed setting. Although the design was novel and interactive, the game was limited to the same 5 melanoma tumors overlaid on pictures of normal skin. The narrow scope earned 1 point for content, the redundancy in the game earned 3 points for design, and the lack of real clinical images earned 2 points for educational objectives. Although game-format mobile apps can challenge the user’s knowledge with a built-in feedback or reward system, improvements should be made to ensure that apps are as educational as they are engaging.

AAD Dialogues in Dermatology was the only app in the form of a podcast and provided expert interviews along with disclosures, transcripts, commentary, and references. More than half the content in the app could not be accessed without a subscription, earning 2.5 points in the conflict of interest category. Additionally, several flaws resulted in a design score of 2.5, including inconsistent availability of transcripts, poor quality of sound on some episodes, difficulty distinguishing new episodes from those already played, and a glitch that removed the episode duration. Still, the app was a valuable and comprehensive resource, with clear objectives and cited references. With improvements in content, affordability, and user experience, apps in unique formats such as games and podcasts might appeal to kinesthetic and auditory learners.

An important factor to consider when discussing mobile apps for students and residents is cost. With rising prices of board examinations and preparation materials, supplementary study tools should not come with an exorbitant price tag. Therefore, we limited our evaluation to apps that were free or cost less than $5 to download. Even so, subscriptions and other in-app purchases were an obstacle in one-third of apps, ranging from $4.99 to unlock additional content in Rash Decisions to $69.99 to access most topics in Fitzpatrick’s Color Atlas. The highest-rated app in our study, MyDermPath+, historically cost $19.99 to download but became free with a grant from the Sulzberger Foundation.12 An initial investment to develop quality apps for the purpose of dermatology education might pay off in the end.

To evaluate the apps from the perspective of the target demographic of this study, 2 medical students—one in the preclinical stage and the other in the clinical stage of medical education—and a dermatology resident graded the apps. Certain limitations exist in this type of study, including differing learning styles, which might influence the types of apps that evaluators found most impactful to their education. Interestingly, some apps earned a higher resident score than student score. In particular, RightSite (a reference that helps with anatomically correct labeling) and Mohs Surgery Appropriate Use Criteria (a clinical decision support tool to determine whether to perform Mohs surgery) each had a 3-point discrepancy (data not shown). A resident might benefit from these practical apps in day-to-day practice, but a student would be less likely to find them useful as a learning tool.



Still, by defining adequate teaching value using specific categories of educational objectives, content, accuracy, design, and conflict of interest, we attempted to minimize the effect of personal preference on the grading process. Although we acknowledge a degree of subjectivity, we found that utilizing a previously published rubric with defined criteria was crucial in remaining unbiased.

Conclusion

Further studies should evaluate additional apps available on Apple’s iPad (tablet), as well as those on other operating systems, including Google’s Android. To ensure that mobile apps serve as adequate education tools, they should, at minimum, be peer reviewed prior to publication or before widespread use by future and current providers. To maximize free access to highly valuable resources available in the palm of their hand, students and trainees should contact the library at their academic institution.

References
  1. Craddock MF, Blondin HM, Youssef MJ, et al. Online education improves pediatric residents' understanding of atopic dermatitis. Pediatr Dermatol. 2018;35:64-69. 
  2. Lacy FA, Coman GC, Holliday AC, et al. Assessment of smartphone application for teaching intuitive visual diagnosis of melanoma. JAMA Dermatol. 2018;154:730-731. 
  3. Flaten HK, St Claire C, Schlager E, et al. Growth of mobile applications in dermatology--2017 update. Dermatol Online J. 2018;24:13. 
  4. Liu R-F, Wang F-Y, Yen H, et al. A new mobile learning module using smartphone wallpapers in identification of medical fungi for medical students and residents. Int J Dermatol. 2018;57:458-462.  
  5. Shahriari N, Grant-Kels J, Murphy MJ. Dermatopathology education in the era of modern technology. J Cutan Pathol. 2017;44:763-771. 
  6. Masud A, Shafi S, Rao BK. Mobile medical apps for patient education: a graded review of available dermatology apps. Cutis. 2018;101:141-144.  
  7. Mercer JM. An array of mobile apps for dermatologists. J Cutan Med Surg. 2014;18:295-297.  
  8. Tongdee E, Markowitz O. Mobile app rankings in dermatology. Cutis. 2018;102:252-256.  
  9. Kirby JS, Adgerson CN, Anderson BE. A survey of dermatology resident education in cosmetic procedures. J Am Acad Dermatol. 2013;68:e23-e28. 
  10. Waldman A, Sobanko JF, Alam M. Practice and educational gaps in cosmetic dermatologic surgery. Dermatol Clin. 2016;34:341-346.  
  11. Nielson CB, Harb JN, Motaparthi K. Education in cosmetic procedural dermatology: resident experiences and perceptions. J Clin Aesthet Dermatol. 2019;12:E70-E72.  
  12. Hanna MG, Parwani AV, Pantanowitz L, et al. Smartphone applications: a contemporary resource for dermatopathology. J Pathol Inform. 2015;6:44.
Issue
Cutis - 106(6)
Page Number
321-325

Practice Points

  • Mobile applications (apps) are a convenient way to learn dermatology, but there is no objective method to assess their quality.
  • To determine which apps are most useful for education, we performed a graded review of dermatology apps targeted to students and residents.
  • By applying a rubric to 36 affordable apps, we identified 18 (50%) with adequate teaching value.

Reliability of Biopsy Margin Status for Basal Cell Carcinoma: A Retrospective Study


Basal cell carcinoma (BCC) is the most common type of skin cancer and is frequently encountered in both dermatology and primary care settings.1 When biopsies of these neoplasms are performed to confirm the diagnosis, pathology reports may indicate positive or negative margin status. No guidelines exist for reporting biopsy margin status for BCC, resulting in varied reporting practices among dermatopathologists. Furthermore, the terminology used to describe margin status can be ambiguous and differs among pathologists; language such as “approaches the margin” or “margins appear free” may be used, with nonuniform interpretation between pathologists and providers, leading to variability in patient management.2

When interpreting a negative margin status on a pathology report, one must question if the BCC extends beyond the margin in unexamined sections of the specimen, which could be the result of an irregular tumor growth pattern or tissue processing. It has been estimated that less than 2% of the peripheral surgical margin is ultimately examined when serial cross-sections are prepared histologically (the bread loaf technique). However, this estimation would depend on several variables, including the number and thickness of sections and the amount of tissue discarded during processing.3 Importantly, reports of a false-negative margin could lead both the clinician and patient to believe that the neoplasm has been completely removed, which could have serious consequences.
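The cited <2% figure can be illustrated with a back-of-the-envelope calculation (illustrative numbers only; the true fraction varies with specimen size, number of levels examined, and section thickness):

```python
# Assumed, illustrative values: a 10-mm-wide specimen bread-loafed into
# serial cross-sections, with one ~4-µm histologic section examined per level.
specimen_width_um = 10_000   # 10 mm of tissue along the sectioning axis
section_thickness_um = 4     # typical microtome section
levels_examined = 10         # slides actually reviewed

# Fraction of the tissue along the sectioning axis that reaches a slide.
fraction_examined = levels_examined * section_thickness_um / specimen_width_um
# 0.004 -> only ~0.4% of the margin is actually examined, well under 2%
```

Tumor extending beyond the margin in any of the unexamined intervals between levels would be invisible on the reviewed slides, which is why a "negative" bread-loaf margin is a sampling statement, not a guarantee.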

Our study sought to determine the reliability of negative biopsy margin status for BCC. We examined BCC biopsy specimens initially determined to have uninvolved margins on routine tissue processing and determined the proportion with truly negative margins after complete tissue block sectioning of the initial biopsy specimen. We felt this technique was a more accurate measurement of true margin status than examination of a re-excision specimen. We also identified any factors that were predictive of positive true margins.

Methods

We conducted a retrospective study evaluating tissue samples collected at Geisinger Health System (Danville, Pennsylvania) from January to December 2016. Specimens were queried via the electronic database system at our institution (CoPath). We included BCC biopsy specimens with negative histologic margins on initial assessment that subsequently had block exhaust levels routinely ordered. These levels are cut every 100 to 150 µm, generating approximately 8 glass slides. We excluded all tumors that did not fit these criteria as well as those in patients younger than 18 years. Data were collected from specimen pathology reports and the corresponding clinician office visit notes in the institution’s electronic medical record (Epic). Appropriate statistical calculations were performed. This study was approved by our institutional review board, as required for all research involving human participants, ensuring proper review and storage of patients’ protected health information.


Results

The search yielded a total of 122 specimens from 104 patients after appropriate exclusions. We examined a total of 122 BCC biopsy specimens with negative initial margins: 121 (99.2%) shave biopsies and 1 (0.8%) punch biopsy. Of 122 specimens with negative initial margins, 53 (43.4%) were found to have a truly positive margin based on the presence of either tumor or stroma at the lateral or deep tissue edge after complete tissue block sectioning. Sixty-nine (56.6%) specimens had clear margins and were categorized as truly negative after complete tissue block sectioning. Specimens with positive and negative final margin status did not differ significantly with respect to patient age; gender; biopsy technique; number of gross specimen sections; or tumor characteristics, including location, size, and subtype (Table)(P>.05).

We also examined the type of treatment performed, which varied and included curettage, electrodesiccation and curettage, excision, and Mohs micrographic surgery. Clinicians, who were not made aware of the exhaust level protocol, chose not to pursue further treatment in 6 (4.9%) of the cases because of negative biopsy margins. Four (66.7%) of the 6 providers were physicians, and 2 (33.3%) were advanced practitioners. All of the providers practiced within the Department of Dermatology.

Comment

Our findings support prior smaller studies investigating this topic. A prospective study by Schnebelen et al4 examined 27 BCC biopsy specimens and found that 8 (30%) were erroneously classified as negative on routine examination. This study similarly determined true margin status by assessing the margins at complete tissue block exhaustion.4 Willardson et al5 also demonstrated the poor predictive value of margin status based on the presence of residual BCC in subsequent excisions. They found that 34 (24%) of 143 cases with negative biopsy margins contained residual tumor in the corresponding excision.5

Our study revealed that almost half of BCC biopsy specimens that had negative histologic margins with routine sectioning had truly positive margins on complete block exhaustion. This finding was independent of multiple factors, including tumor subtype, indicating that even nonaggressive tumors are prone to false-negative margin reports. We also found that reports of negative margins persuaded some clinicians to forgo definitive treatment. This study serves to remind clinicians of the limitations of margin assessment and provides impetus for dermatopathologists to consider modifying how margin status is reported.

Limitations of this study include a small number of cases and limited generalizability. Institutions that routinely examine more levels of each biopsy specimen may be less likely to erroneously categorize a positive margin as negative. Furthermore, despite exhausting the tissue block, we still may have underestimated the number of cases with truly positive margins, as this method inherently does not allow for complete margin examination.



Acknowledgments
We thank the Geisinger Department of Dermatopathology and the Geisinger Biostatistics & Research Data Core (Danville, Pennsylvania) for their assistance with our project.

References
  1. Lukowiak TM, Aizman L, Perz A, et al. Association of age, sex, race, and geographic region with variation of the ratio of basal cell to squamous cell carcinomas in the United States. JAMA Dermatol. 2020;156:1149-1276.
  2. Abide JM, Nahai F, Bennett RG. The meaning of surgical margins. Plast Reconstr Surg. 1984;73:492-497.
  3. Kimyai-Asadi A, Goldberg LH, Jih MH. Accuracy of serial transverse cross-sections in detecting residual basal cell carcinoma at the surgical margins of an elliptical excision specimen. J Am Acad Dermatol. 2005;53:469-473.
  4. Schnebelen AM, Gardner JM, Shalin SC. Margin status in shave biopsies of nonmelanoma skin cancers: is it worth reporting? Arch Pathol Lab Med. 2016;140:678-681.
  5. Willardson HB, Lombardo J, Raines M, et al. Predictive value of basal cell carcinoma biopsies with negative margins: a retrospective cohort study. J Am Acad Dermatol. 2018;79:42-46.
Author and Disclosure Information

From the Department of Dermatology, Geisinger Health System, Danville, Pennsylvania.

The authors report no conflict of interest.

Correspondence: Mary C. Brady, MD, 493 Columbia Hill Rd, Danville, PA 17821 (mcb018@bucknell.edu).

Issue
Cutis - 106(6)
Page Number
315-317
Sections
Author and Disclosure Information

From the Department of Dermatology, Geisinger Health System, Danville, Pennsylvania.

The authors report no conflict of interest.

Correspondence: Mary C. Brady, MD, 493 Columbia Hill Rd, Danville, PA 17821 (mcb018@bucknell.edu).

Author and Disclosure Information

From the Department of Dermatology, Geisinger Health System, Danville, Pennsylvania.

The authors report no conflict of interest.

Correspondence: Mary C. Brady, MD, 493 Columbia Hill Rd, Danville, PA 17821 (mcb018@bucknell.edu).

Article PDF
Article PDF

Basal cell carcinoma (BCC) is the most common type of skin cancer frequently encountered in both dermatology and primary care settings.1 When biopsies of these neoplasms are performed to confirm the diagnosis, pathology reports may indicate positive or negative margin status. No guidelines exist for reporting biopsy margin status for BCC, resulting in varied reporting practices among dermatopathologists. Furthermore, the terminology used to describe margin status can be ambiguous and differs among pathologists; language such as “approaches the margin” or “margins appear free” may be used, with nonuniform interpretation between pathologists and providers, leading to variability in patient management.2

When interpreting a negative margin status on a pathology report, one must question if the BCC extends beyond the margin in unexamined sections of the specimen, which could be the result of an irregular tumor growth pattern or tissue processing. It has been estimated that less than 2% of the peripheral surgical margin is ultimately examined when serial cross-sections are prepared histologically (the bread loaf technique). However, this estimation would depend on several variables, including the number and thickness of sections and the amount of tissue discarded during processing.3 Importantly, reports of a false-negative margin could lead both the clinician and patient to believe that the neoplasm has been completely removed, which could have serious consequences.

Our study sought to determine the reliability of negative biopsy margin status for BCC. We examined BCC biopsy specimens initially determined to have uninvolved margins on routine tissue processing and determined the proportion with truly negative margins after complete tissue block sectioning of the initial biopsy specimen. We felt this technique was a more accurate measurement of true margin status than examination of a re-excision specimen. We also identified any factors that were predictive of positive true margins.

Methods

We conducted a retrospective study evaluating tissue samples collected at Geisinger Health System (Danville, Pennsylvania) from January to December 2016. Specimens were queried via the electronic database system at our institution (CoPath). We included BCC biopsy specimens with negative histologic margins on initial assessment that subsequently had block exhaust levels routinely ordered. These levels are cut every 100 to 150 µm, generating approximately 8 glass slides. We excluded all tumors that did not fit these criteria as well as those in patients younger than 18 years. Data collection was performed utilizing specimen pathology reports in addition to the note from the corresponding clinician office visit from the institution’s electronic medical record (Epic). Appropriate statistical calculations were performed. This study was approved by an institutional review board at our institution, which is required for all research involving human participants. This served to ensure the proper review and storage of patients’ protected health information.

 

 

Results

The search yielded a total of 122 specimens from 104 patients after appropriate exclusions. We examined a total of 122 BCC biopsy specimens with negative initial margins: 121 (99.2%) shave biopsies and 1 (0.8%) punch biopsy. Of 122 specimens with negative initial margins, 53 (43.4%) were found to have a truly positive margin based on the presence of either tumor or stroma at the lateral or deep tissue edge after complete tissue block sectioning. Sixty-nine (56.6%) specimens had clear margins and were categorized as truly negative after complete tissue block sectioning. Specimens with positive and negative final margin status did not differ significantly with respect to patient age; gender; biopsy technique; number of gross specimen sections; or tumor characteristics, including location, size, and subtype (Table)(P>.05).

We also examined the type of treatment performed, which varied and included curettage, electrodesiccation and curettage, excision, and Mohs micrographic surgery. Clinicians, who were not made aware of the exhaust level protocol, chose not to pursue further treatment in 6 (4.9%) of the cases because of negative biopsy margins. Four (66.7%) of the 6 providers were physicians, and 2 (33.3%) were advanced practitioners. All of the providers practiced within the Department of Dermatology.

Comment

Our findings support prior smaller studies investigating this topic. A prospective study by Schnebelen et al4 examined 27 BCC biopsy specimens and found that 8 (30%) were erroneously classified as negative on routine examination. This study similarly determined true margin status by assessing the margins at complete tissue block exhaustion.4 Willardson et al5 also demonstrated the poor predictive value of margin status based on the presence of residual BCC in subsequent excisions. They found that 34 (24%) of 143 cases with negative biopsy margins contained residual tumor in the corresponding excision.5

Our study revealed that almost half of BCC biopsy specimens that had negative histologic margins with routine sectioning had truly positive margins on complete block exhaustion. This finding was independent of multiple factors, including tumor subtype, indicating that even nonaggressive tumors are prone to false-negative margin reports. We also found that reports of negative margins persuaded some clinicians to forgo definitive treatment. This study serves to remind clinicians of the limitations of margin assessment and provides impetus for dermatopathologists to consider modifying how margin status is reported.

Limitations of this study include a small number of cases and limited generalizability. Institutions that routinely examine more levels of each biopsy specimen may be less likely to erroneously categorize a positive margin as negative. Furthermore, despite exhausting the tissue block, we still may have underestimated the number of cases with truly positive margins, as this method inherently does not allow for complete margin examination.



Acknowledgments
We thank the Geisinger Department of Dermatopathology and the Geisinger Biostatistics & Research Data Core (Danville, Pennsylvania) for their assistance with our project.

Basal cell carcinoma (BCC) is the most common type of skin cancer frequently encountered in both dermatology and primary care settings.1 When biopsies of these neoplasms are performed to confirm the diagnosis, pathology reports may indicate positive or negative margin status. No guidelines exist for reporting biopsy margin status for BCC, resulting in varied reporting practices among dermatopathologists. Furthermore, the terminology used to describe margin status can be ambiguous and differs among pathologists; language such as “approaches the margin” or “margins appear free” may be used, with nonuniform interpretation between pathologists and providers, leading to variability in patient management.2

When interpreting a negative margin status on a pathology report, one must question if the BCC extends beyond the margin in unexamined sections of the specimen, which could be the result of an irregular tumor growth pattern or tissue processing. It has been estimated that less than 2% of the peripheral surgical margin is ultimately examined when serial cross-sections are prepared histologically (the bread loaf technique). However, this estimation would depend on several variables, including the number and thickness of sections and the amount of tissue discarded during processing.3 Importantly, reports of a false-negative margin could lead both the clinician and patient to believe that the neoplasm has been completely removed, which could have serious consequences.

Our study sought to determine the reliability of negative biopsy margin status for BCC. We examined BCC biopsy specimens initially determined to have uninvolved margins on routine tissue processing and determined the proportion with truly negative margins after complete tissue block sectioning of the initial biopsy specimen. We felt this technique was a more accurate measurement of true margin status than examination of a re-excision specimen. We also identified any factors that were predictive of positive true margins.

Methods

We conducted a retrospective study evaluating tissue samples collected at Geisinger Health System (Danville, Pennsylvania) from January to December 2016. Specimens were queried via the electronic database system at our institution (CoPath). We included BCC biopsy specimens with negative histologic margins on initial assessment that subsequently had block exhaust levels routinely ordered. These levels are cut every 100 to 150 µm, generating approximately 8 glass slides. We excluded all tumors that did not fit these criteria as well as those in patients younger than 18 years. Data collection was performed utilizing specimen pathology reports in addition to the note from the corresponding clinician office visit from the institution’s electronic medical record (Epic). Appropriate statistical calculations were performed. This study was approved by our institutional review board, as required for all research involving human participants, ensuring proper review and storage of patients’ protected health information.

Results

The search yielded a total of 122 specimens from 104 patients after appropriate exclusions. We examined a total of 122 BCC biopsy specimens with negative initial margins: 121 (99.2%) shave biopsies and 1 (0.8%) punch biopsy. Of 122 specimens with negative initial margins, 53 (43.4%) were found to have a truly positive margin based on the presence of either tumor or stroma at the lateral or deep tissue edge after complete tissue block sectioning. Sixty-nine (56.6%) specimens had clear margins and were categorized as truly negative after complete tissue block sectioning. Specimens with positive and negative final margin status did not differ significantly with respect to patient age; gender; biopsy technique; number of gross specimen sections; or tumor characteristics, including location, size, and subtype (Table)(P>.05).

We also examined the type of treatment performed, which varied and included curettage, electrodesiccation and curettage, excision, and Mohs micrographic surgery. Clinicians, who were not made aware of the exhaust level protocol, chose not to pursue further treatment in 6 (4.9%) of the cases because of negative biopsy margins. Four (66.7%) of the 6 providers were physicians, and 2 (33.3%) were advanced practitioners. All of the providers practiced within the Department of Dermatology.

Comment

Our findings are consistent with prior smaller studies investigating this topic. A prospective study by Schnebelen et al4 examined 27 BCC biopsy specimens and found that 8 (30%) were erroneously classified as negative on routine examination. This study similarly determined true margin status by assessing the margins at complete tissue block exhaustion.4 Willardson et al5 also demonstrated the poor predictive value of margin status, based on the presence of residual BCC in subsequent excisions. They found that 34 (24%) of 143 cases with negative biopsy margins contained residual tumor in the corresponding excision.5

Our study revealed that almost half of BCC biopsy specimens that had negative histologic margins with routine sectioning had truly positive margins on complete block exhaustion. This finding was independent of multiple factors, including tumor subtype, indicating that even nonaggressive tumors are prone to false-negative margin reports. We also found that reports of negative margins persuaded some clinicians to forgo definitive treatment. This study serves to remind clinicians of the limitations of margin assessment and provides impetus for dermatopathologists to consider modifying how margin status is reported.

Limitations of this study include a small number of cases and limited generalizability. Institutions that routinely examine more levels of each biopsy specimen may be less likely to erroneously categorize a positive margin as negative. Furthermore, despite exhausting the tissue block, we still may have underestimated the number of cases with truly positive margins, as this method inherently does not allow for complete margin examination.



Acknowledgments
We thank the Geisinger Department of Dermatopathology and the Geisinger Biostatistics & Research Data Core (Danville, Pennsylvania) for their assistance with our project.

References
  1. Lukowiak TM, Aizman L, Perz A, et al. Association of age, sex, race, and geographic region with variation of the ratio of basal cell to squamous cell carcinomas in the United States. JAMA Dermatol. 2020;156:1149-1276.
  2. Abide JM, Nahai F, Bennett RG. The meaning of surgical margins. Plast Reconstr Surg. 1984;73:492-497.
  3. Kimyai-Asadi A, Goldberg LH, Jih MH. Accuracy of serial transverse cross-sections in detecting residual basal cell carcinoma at the surgical margins of an elliptical excision specimen. J Am Acad Dermatol. 2005;53:469-473.
  4. Schnebelen AM, Gardner JM, Shalin SC. Margin status in shave biopsies of nonmelanoma skin cancers: is it worth reporting? Arch Pathol Lab Med. 2016;140:678-681.
  5. Willardson HB, Lombardo J, Raines M, et al. Predictive value of basal cell carcinoma biopsies with negative margins: a retrospective cohort study. J Am Acad Dermatol. 2018;79:42-46.
Issue
Cutis - 106(6)
Page Number
315-317

Practice Points

  • Clinicians must recognize the limitations of margin assessment of biopsy specimens and not rely on margin status to dictate treatment.
  • Dermatopathologists should consider modifying how margin status is reported, either by omitting it or clarifying its limitations on the pathology report.

Reducing Inappropriate Laboratory Testing in the Hospital Setting: How Low Can We Go?


From the University of Toronto (Dr. Basuita, Corey L. Kamen, and Dr. Soong) and Sinai Health System (Corey L. Kamen, Cheryl Ethier, and Dr. Soong), Toronto, Ontario, Canada. Co-first authors are Manpreet Basuita, MD, and Corey L. Kamen, BSc.

Abstract

  • Objective: Routine laboratory testing is common among medical inpatients; however, when ordered inappropriately, it can represent low-value care. We examined the impact of an evidence-based intervention bundle on utilization.
  • Participants/setting: This prospective cohort study took place at a tertiary academic medical center and included 6424 patients admitted to the general internal medicine service between April 2016 and March 2018.
  • Intervention: An intervention bundle, whose first components were implemented in July 2016, included computer order entry restrictions on repetitive laboratory testing, education, and audit-feedback.
  • Measures: Data were extracted from the hospital electronic health record. The primary outcome was the number of routine blood tests (complete blood count, creatinine, and electrolytes) ordered per inpatient day.
  • Analysis: Descriptive statistics were calculated for demographic variables. We used statistical process control charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome.
  • Results: The mean number of combined routine laboratory tests ordered per inpatient day decreased from 1.19 (SD, 0.21) tests to 1.11 (SD, 0.05), a relative reduction of 6.7% (P < 0.0001). Mean cost per case related to laboratory tests decreased from $17.24 in the pre-intervention period to $16.17 in the post-intervention period (relative reduction of 6.2%). This resulted in savings of $26,851 in the intervention year.
  • Conclusion: A laboratory intervention bundle was associated with small reductions in testing and costs. A routine test performed less than once per inpatient day may not be clinically appropriate or possible.

Keywords: utilization; clinical costs; quality improvement; QI intervention; internal medicine; inpatient.

Routine laboratory blood testing is a commonly used diagnostic tool that physicians rely on to provide patient care. Although routine blood testing represents less than 5% of most hospital budgets, routine use of and over-reliance on testing among physicians make it a target of cost-reduction efforts.1-3 A variety of interventions have been proposed to reduce inappropriate laboratory tests, with varying results.1,4-6 Successful interventions include providing physicians with fee data associated with ordered laboratory tests, unbundling panels of tests, and multicomponent interventions.6 We conducted a multifaceted quality improvement study to develop and promote interventions that support appropriate blood test ordering practices.

Methods

Setting

This prospective cohort study took place at Mount Sinai Hospital, a 443-bed academic hospital affiliated with the University of Toronto, where more than 2400 learners rotate through annually. The study was approved by the Mount Sinai Hospital Research Ethics Board.

Participants

We included all inpatient admissions to the general internal medicine service between April 2016 and March 2018. Exclusion criteria included a length of stay (LOS) longer than 365 days and admission to a critical care unit. Patients with more than 1 admission were counted as separate hospital inpatient visits.


Intervention

Based on internal data, we targeted the top 3 most frequently ordered routine blood tests: complete blood count (CBC), creatinine, and electrolytes. Trainee interviews revealed that habit, bundled order sets, and fear of “missing something” contributed to inappropriate routine blood test ordering. Based on these root causes, we used the Model for Improvement to iteratively develop a multimodal intervention that began in July 2016.7,8 This included a change to the computerized provider order entry (CPOE) to nudge clinicians to a restrictive ordering strategy by substituting the “Daily x3” frequency of blood test ordering with a “Daily x1” option on a pick list of order options. Clinicians could still order daily routine blood tests for any specified duration, but would have to do so by manually changing the default setting within the CPOE.

From July 2017 to March 2018, the research team educated residents on appropriate laboratory test ordering and provided audit and feedback data to the clinicians. Diagnostic uncertainty was addressed in teaching sessions. Attending physicians were surveyed on appropriate indications for daily laboratory testing for each of CBC, electrolytes, and creatinine. Appropriate indications (Figure 1) were displayed in visible clinical areas and incorporated into teaching sessions.9

Educational tool displaying appropriate indications for routine daily laboratory testing based on consensus

Clinician teams received real-time performance data on their routine blood test ordering patterns compared with an institutional benchmark. Bar graphs of blood work ordering rates (sum of CBCs, creatinine, and electrolytes ordered for all patients on a given team divided by the total LOS for all patients) were distributed to each internal medicine team via email every 2 weeks (Figure 2).1,10-12
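The feedback metric described above can be sketched in a few lines. This is a minimal illustration of the stated formula (tests ordered divided by total length of stay); the patient records below are hypothetical, not study data.

```python
# Minimal sketch of the audit-and-feedback metric described above:
# routine tests ordered per inpatient day for one clinical team.
# The patient records are hypothetical, for illustration only.

def ordering_rate(patients):
    """Sum of CBC, creatinine, and electrolyte orders across a team's
    patients, divided by the team's total length of stay in days."""
    total_tests = sum(p["cbc"] + p["creatinine"] + p["electrolytes"]
                      for p in patients)
    total_los = sum(p["los_days"] for p in patients)
    return total_tests / total_los

team = [
    {"cbc": 4, "creatinine": 4, "electrolytes": 5, "los_days": 6},
    {"cbc": 2, "creatinine": 2, "electrolytes": 2, "los_days": 3},
]
print(round(ordering_rate(team), 2))  # 19 tests over 9 days -> 2.11
```

Computing the rate at the team level, rather than per patient, matches the biweekly bar graphs emailed to each internal medicine team.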

Sample of biweekly data distributed to each general internal medicine (GIM) team to illustrate blood work ordering patterns relative to average of all teams

Data Collection and Analysis

Data were extracted from the hospital electronic health record (EHR). The primary outcome was the number of routine blood tests (CBC, creatinine, and electrolytes) ordered per inpatient day. Descriptive statistics were calculated for demographic variables. We used statistical process control (SPC) charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome. SPC charts display process changes over time. Data are plotted in chronological order, with the central line representing the outcome mean, an upper line representing the upper control limit, and a lower line representing the lower control limit. The upper and lower limits were set at 3σ, corresponding to 3 standard deviations above and below the mean. Six successive points above or below the mean suggest “special cause variation,” indicating that observed results are unlikely due to secular trends. SPC charts are commonly used quality tools for process improvement as well as research.13-16 These charts were created using QI Macros SPC software for Excel V. 2012.07 (KnowWare International, Denver, CO).
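The SPC logic described here (a center line at the mean, control limits at ±3 standard deviations, and a run rule flagging 6 or more consecutive points on one side of the mean) can be sketched directly. The data series below is illustrative only, not the study's measurements.

```python
# Sketch of the control-chart logic described above. The series of
# tests-per-inpatient-day values is illustrative, not the study's data.
from statistics import mean, pstdev

def control_limits(series):
    """Center line at the mean; limits at +/- 3 standard deviations."""
    m, s = mean(series), pstdev(series)
    return m - 3 * s, m, m + 3 * s

def special_cause_run(series, center, run_len=6):
    """Flag run_len consecutive points on the same side of the center line."""
    run, last_side = 0, 0
    for x in series:
        side = 1 if x > center else (-1 if x < center else 0)
        run = run + 1 if (side == last_side and side != 0) else (1 if side else 0)
        last_side = side
        if run >= run_len:
            return True
    return False

rates = [1.20, 1.18, 1.22, 1.19, 1.21,        # baseline-like points
         1.12, 1.10, 1.11, 1.09, 1.12, 1.10]  # sustained shift below the mean
lcl, center, ucl = control_limits(rates)
print(special_cause_run(rates, center))  # True: 6+ points below the mean
```

The run rule is what distinguishes a sustained post-intervention shift from ordinary point-to-point variation within the control limits.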

The direct cost of each laboratory test was acquired from the hospital laboratory department. The cost of each laboratory test (CBC = $7.54/test, electrolytes = $2.04/test, creatinine = $1.28/test, in Canadian dollars) was subsequently added together and multiplied by the pre- and post-intervention difference of total blood tests saved per inpatient day and then multiplied by 365 to arrive at an estimated cost savings per year.
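The cost arithmetic can be sketched with the published unit costs. Note that the equal weighting of the three tests is an assumption (the true test mix is not reported), and the hospital's total inpatient-day volume is not restated here, so this sketch reproduces only the per-inpatient-day saving and the relative reduction, not the published annual total.

```python
# Sketch of the cost arithmetic described above, using the published
# Canadian-dollar unit costs. Equal weighting of the three tests is an
# assumption; the actual test mix is not reported.
COST_PER_TEST = {"cbc": 7.54, "electrolytes": 2.04, "creatinine": 1.28}

tests_per_day_pre = 1.19   # combined CBC + creatinine + electrolytes
tests_per_day_post = 1.11

# Average cost of one routine test order under the equal-mix assumption.
avg_cost = sum(COST_PER_TEST.values()) / len(COST_PER_TEST)

saving_per_inpatient_day = (tests_per_day_pre - tests_per_day_post) * avg_cost
relative_reduction = (tests_per_day_pre - tests_per_day_post) / tests_per_day_pre

print(round(saving_per_inpatient_day, 2))   # dollars saved per inpatient day
print(f"{relative_reduction:.1%}")          # 6.7%, the reported relative reduction
```

Scaling the per-inpatient-day saving by annual inpatient-day volume yields the institution-level figure reported in the Results.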


Results

Over the study period, there were 6424 unique patient admissions on the general internal medicine service, with a median LOS of 3.5 days (Table).

Characteristics and Outcomes of Patients Discharged From General Internal Medicine Ward, April 2016 to March 2018

The majority of inpatient visits had at least 1 test of CBC (80%; mean, 3.6 tests/visit), creatinine (79.3%; mean, 3.5 tests/visit), or electrolytes (81.6%; mean, 3.9 tests/visit) completed. In total, 56,767 laboratory tests were ordered.

Following the intervention, there was a reduction in both rates of routine blood test orders and their associated costs, with a shift below the mean. The mean number of tests ordered (combined CBC, creatinine, and electrolytes) per inpatient day decreased from 1.19 (SD, 0.21) in the pre-intervention period to 1.11 (SD, 0.05) in the post-intervention period (P < 0.0001), representing a 6.7% relative reduction (Figure 3). We observed a 6.2% relative reduction in costs per inpatient day, translating to a total savings of $26,851 over 1 year for the intervention period.

Routine blood work ordering rates pre- and post-intervention

Discussion

Our study suggests that a multimodal intervention, including CPOE restrictions, resident education with posters, and audit and feedback strategies, can reduce lab test ordering on general internal medicine wards. This finding is similar to those of previous studies using a similar intervention, although different laboratory tests were targeted.1,2,5,6,10,17

Our study found smaller reductions in test ordering than those reported by a previous study, which described a relative reduction of 17% to 30%,18 and by another investigation conducted recently in a similar setting.17 In the latter study, reductions in laboratory testing were mostly found in nonroutine tests, and no significant improvements were noted in CBC, electrolytes, and creatinine, the 3 tests we studied over the same duration.17 This may represent a ceiling effect in reducing laboratory testing, and efforts to reduce CBC, electrolyte, and creatinine testing beyond 0.3 to 0.4 tests per inpatient day (or a combined 1.16 tests per inpatient day) may not be clinically appropriate or possible. This information can guide institutions to include other areas of overuse based on rates of utilization in order to maximize the benefits of a resource-intensive intervention.

There are a number of limitations that merit discussion. First, observational studies do not demonstrate causation; however, to our knowledge, no other co-interventions were conducted during the study period. Notably, our intervention began in July, when new internal medicine residents begin their training. As resource allocation becomes more important, medical schools are spending more time educating students about Choosing Wisely, and newer cohorts of residents may therefore be more cognizant of appropriate blood testing. Second, this is a single-center study, limiting generalizability; however, many other centers have reported similar findings. Another limitation is that we do not know whether there were any adverse clinical events associated with overly restrictive blood work ordering, although informal tracking of STAT laboratory testing remained stable throughout the study period. It is important to ensure that blood work is ordered in moderation and tailored to patients using one’s clinical judgment.

Future Directions

We observed modest reductions in the quantity and costs associated with a quality improvement intervention aimed at reducing routine blood testing. A baseline rate of laboratory testing of less than 1 test per inpatient day may require including other target tests to drive down absolute utilization.

Corresponding author: Christine Soong, MD, MSc, 433-600 University Avenue, Toronto, Ontario, Canada M5G 1X5; Christine.soong@utoronto.ca.

Financial disclosures: None.

References

1. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;178:431.

2. May TA, Clancy M, Critchfield J, et al. Reducing unnecessary inpatient laboratory testing in a teaching hospital. Am J Clin Pathol. 2006;126:200-206.

3. Thavendiranathan P, Bagai A, Ebidia A, et al. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20:520-524.

4. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173:903-908.

5. Attali M, Barel Y, Somin M, et al. A cost-effective method for reducing the volume of laboratory tests in a university-associated teaching hospital. Mt Sinai J Med. 2006;73:787-794.

6. Faisal A, Andres K, Rind JAK, et al. Reducing the number of unnecessary routine laboratory tests through education of internal medicine residents. Postgrad Med J. 2018;94:716-719.

7. How to Improve. Institute for Healthcare Improvement. 2009. http://www.ihi.org/resources/Pages/HowtoImprove/default.aspx. Accessed June 5, 2019.

8. Langley GL, Moen R, Nolan KM, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass Publishers; 2009.

9. Hicks L. Blood Draws Toolkit. Choosing Wisely Canada. 2017. https://choosingwiselycanada.org/wpcontent/uploads/2017/10/CWC_BloodDraws_Toolkit.pdf. Accessed March 5, 2019.

10. Sadowski BW, Lane AB, Wood SM, et al. High-value, cost-conscious care: iterative systems-based interventions to reduce unnecessary laboratory testing. Am J Med. 2017;130:1112e1-1112e7.

11. Minerowicz C, Abel N, Hunter K, et al. Impact of weekly feedback on test ordering patterns. Am J Manag Care. 2015;21:763-768.

12. Calderon-Margalit R, Mor-Yosef S, et al. An administrative intervention to improve the utilization of laboratory tests within a university hospital. Int J Qual Health Care. 2005;17:243-248.

13. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.

14. American Society for Quality. Control chart. ASQ website. https://asq.org/quality-resources/control-chart. Accessed November 5, 2020.

15. American Society for Quality. The 7 Basic Quality Tools for Process Improvement. ASQ website. https://asq.org/quality-resources/seven-basic-quality-tools. Accessed November 5, 2020.

16. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.

17. Ambasta A, Ma IWY, Woo S, et al. Impact of an education and multilevel social comparison-based intervention bundle on use of routine blood tests in hospitalised patients at an academic tertiary care hospital: a controlled pre-intervention post-intervention study. BMJ Qual Saf. 2020;29:1-2.

18. Lee VS, Kawamoto K, Hess R, et al. Implementation of a value-driven outcomes program to identify high variability in clinical costs and outcomes and association with reduced cost and improved quality. JAMA. 2016;316:1061-1072.

Issue
Journal of Clinical Outcomes Management - 27(6)
Page Number
261-264,269
Sections
Article PDF
Article PDF

From the University of Toronto (Dr. Basuita, Corey L. Kamen, and Dr. Soong) and Sinai Health System (Corey L. Kamen, Cheryl Ethier, and Dr. Soong), Toronto, Ontario, Canada. Co-first authors are Manpreet Basuita, MD, and Corey L. Kamen, BSc.

Abstract

  • Objective: Routine laboratory testing is common among medical inpatients; however, when ordered inappropriately testing can represent low-value care. We examined the impact of an evidence-based intervention bundle on utilization.
  • Participants/setting: This prospective cohort study took place at a tertiary academic medical center and included 6424 patients admitted to the general internal medicine service between April 2016 and March 2018.
  • Intervention: An intervention bundle, whose first components were implemented in July 2016, included computer order entry restrictions on repetitive laboratory testing, education, and audit-feedback.
  • Measures: Data were extracted from the hospital electronic health record. The primary outcome was the number of routine blood tests (complete blood count, creatinine, and electrolytes) ordered per inpatient day.
  • Analysis: Descriptive statistics were calculated for demographic variables. We used statistical process control charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome.
  • Results: The mean number of combined routine laboratory tests ordered per inpatient day decreased from 1.19 (SD, 0.21) tests to 1.11 (SD, 0.05), a relative reduction of 6.7% (P < 0.0001). Mean cost per case related to laboratory tests decreased from $17.24 in the pre-intervention period to $16.17 in the post-intervention period (relative reduction of 6.2%). This resulted in savings of $26,851 in the intervention year.
  • Conclusion: A laboratory intervention bundle was associated with small reductions in testing and costs. A routine test performed less than once per inpatient day may not be clinically appropriate or possible.

Keywords: utilization; clinical costs; quality improvement; QI intervention; internal medicine; inpatient.

Routine laboratory blood testing is a commonly used diagnostic tool that physicians rely on to provide patient care. Although routine blood testing represents less than 5% of most hospital budgets, routine use and over-reliance on testing among physicians makes it a target of cost-reduction efforts.1-3 A variety of interventions have been proposed to reduce inappropriate laboratory tests, with varying results.1,4-6 Successful interventions include providing physicians with fee data associated with ordered laboratory tests, unbundling panels of tests, and multicomponent interventions.6 We conducted a multifaceted quality improvement study to promote and develop interventions to adopt appropriate blood test ordering practices.

Methods

Setting

This prospective cohort study took place at Mount Sinai Hospital, a 443-bed academic hospital affiliated with the University of Toronto, where more than 2400 learners rotate through annually. The study was approved by the Mount Sinai Hospital Research Ethics Board.

Participants

We included all inpatient admissions to the general internal medicine service between April 2016 and March 2018. Exclusion criteria included a length of stay (LOS) longer than 365 days and admission to a critical care unit. Patients with more than 1 admission were counted as separate hospital inpatient visits.

 

 

Intervention

Based on internal data, we targeted the top 3 most frequently ordered routine blood tests: complete blood count (CBC), creatinine, and electrolytes. Trainee interviews revealed that habit, bundled order sets, and fear of “missing something” contributed to inappropriate routine blood test ordering. Based on these root causes, we used the Model for Improvement to iteratively develop a multimodal intervention that began in July 2016.7,8 This included a change to the computerized provider order entry (CPOE) to nudge clinicians to a restrictive ordering strategy by substituting the “Daily x3” frequency of blood test ordering with a “Daily x1” option on a pick list of order options. Clinicians could still order daily routine blood tests for any specified duration, but would have to do so by manually changing the default setting within the CPOE.

From July 2017 to March 2018, the research team educated residents on appropriate laboratory test ordering and provided audit and feedback data to the clinicians. Diagnostic uncertainty was addressed in teaching sessions. Attending physicians were surveyed on appropriate indications for daily laboratory testing for each of CBC, electrolytes, and creatinine. Appropriate indications (Figure 1) were displayed in visible clinical areas and incorporated into teaching sessions.9

Educational tool displaying appropriate indications for routine daily laboratory testing based on consensus

Clinician teams received real-time performance data on their routine blood test ordering patterns compared with an institutional benchmark. Bar graphs of blood work ordering rates (sum of CBCs, creatinine, and electrolytes ordered for all patients on a given team divided by the total LOS for all patients) were distributed to each internal medicine team via email every 2 weeks (Figure 2).1,10-12

 

Sample of biweekly data distributed to each general internal medicine (GIM) team to illustrate blood work ordering patterns relative to average of all teams

Data Collection and Analysis

Data were extracted from the hospital electronic health record (EHR). The primary outcome was the number of routine blood tests (CBC, creatinine, and electrolytes) ordered per inpatient day. Descriptive statistics were calculated for demographic variables. We used statistical process control (SPC) charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome. SPC charts display process changes over time. Data are plotted in chronological order, with the central line representing the outcome mean, an upper line representing the upper control limit, and a lower line representing the lower control limit. The upper and lower limits were set at 3δ, which correspond to 3 standard deviations above and below the mean. Six successive points above or beyond the mean suggests “special cause variation,” indicating that observed results are unlikely due to secular trends. SPC charts are commonly used quality tools for process improvement as well as research.13-16 These charts were created using QI Macros SPC software for Excel V. 2012.07 (KnowWare International, Denver, CO).

The direct cost of each laboratory test was acquired from the hospital laboratory department. The cost of each laboratory test (CBC = $7.54/test, electrolytes = $2.04/test, creatinine = $1.28/test, in Canadian dollars) was subsequently added together and multiplied by the pre- and post-intervention difference of total blood tests saved per inpatient day and then multiplied by 365 to arrive at an estimated cost savings per year.

 

 

Results

Over the study period, there were 6424 unique patient admissions on the general internal medicine service, with a median LOS of 3.5 days (Table).

Characteristics and Outcomes of Patients Discharged From General Internal Medicine Ward, April 2016 to March 2018

The majority of inpatient visits had at least 1 test of CBC (80%; mean, 3.6 tests/visit), creatinine (79.3%; mean, 3.5 tests/visit), or electrolytes (81.6%; mean, 3.9 tests/visit) completed. In total, 56,767 laboratory tests were ordered.

Following the intervention, there was a reduction in both rates of routine blood test orders and their associated costs, with a shift below the mean. The mean number of tests ordered (combined CBC, creatinine, and electrolytes) per inpatient day decreased from 1.19 (SD, 0.21) in the pre-intervention period to 1.11 (SD, 0.05) in the post-intervention period (P < 0.0001), representing a 6.7% relative reduction (Figure 3). We observed a 6.2% relative reduction in costs per inpatient day, translating to a total savings of $26,851 over 1 year for the intervention period.

Routine blood work ordering rates pre- and post-intervention

Discussion

Our study suggests that a multimodal intervention, including CPOE restrictions, resident education with posters, and audit and feedback strategies, can reduce lab test ordering on general internal medicine wards. This finding is similar to those of previous studies using a similar intervention, although different laboratory tests were targeted.1,2,5,6,10,17

From the University of Toronto (Dr. Basuita, Corey L. Kamen, and Dr. Soong) and Sinai Health System (Corey L. Kamen, Cheryl Ethier, and Dr. Soong), Toronto, Ontario, Canada. Co-first authors are Manpreet Basuita, MD, and Corey L. Kamen, BSc.

Abstract

  • Objective: Routine laboratory testing is common among medical inpatients; however, when ordered inappropriately, testing can represent low-value care. We examined the impact of an evidence-based intervention bundle on utilization.
  • Participants/setting: This prospective cohort study took place at a tertiary academic medical center and included 6424 patients admitted to the general internal medicine service between April 2016 and March 2018.
  • Intervention: An intervention bundle, whose first components were implemented in July 2016, included computer order entry restrictions on repetitive laboratory testing, education, and audit-feedback.
  • Measures: Data were extracted from the hospital electronic health record. The primary outcome was the number of routine blood tests (complete blood count, creatinine, and electrolytes) ordered per inpatient day.
  • Analysis: Descriptive statistics were calculated for demographic variables. We used statistical process control charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome.
  • Results: The mean number of combined routine laboratory tests ordered per inpatient day decreased from 1.19 (SD, 0.21) tests to 1.11 (SD, 0.05), a relative reduction of 6.7% (P < 0.0001). Mean cost per case related to laboratory tests decreased from $17.24 in the pre-intervention period to $16.17 in the post-intervention period (relative reduction of 6.2%). This resulted in savings of $26,851 in the intervention year.
  • Conclusion: A laboratory intervention bundle was associated with small reductions in testing and costs. A routine test performed less than once per inpatient day may not be clinically appropriate or possible.

Keywords: utilization; clinical costs; quality improvement; QI intervention; internal medicine; inpatient.

Routine laboratory blood testing is a commonly used diagnostic tool that physicians rely on to provide patient care. Although routine blood testing represents less than 5% of most hospital budgets, routine use of and over-reliance on testing among physicians make it a target of cost-reduction efforts.1-3 A variety of interventions have been proposed to reduce inappropriate laboratory tests, with varying results.1,4-6 Successful interventions include providing physicians with fee data associated with ordered laboratory tests, unbundling panels of tests, and multicomponent interventions.6 We conducted a multifaceted quality improvement study to develop and promote interventions that encourage appropriate blood test ordering practices.

Methods

Setting

This prospective cohort study took place at Mount Sinai Hospital, a 443-bed academic hospital affiliated with the University of Toronto, where more than 2400 learners rotate through annually. The study was approved by the Mount Sinai Hospital Research Ethics Board.

Participants

We included all inpatient admissions to the general internal medicine service between April 2016 and March 2018. Exclusion criteria included a length of stay (LOS) longer than 365 days and admission to a critical care unit. Patients with more than 1 admission were counted as separate hospital inpatient visits.

Intervention

Based on internal data, we targeted the top 3 most frequently ordered routine blood tests: complete blood count (CBC), creatinine, and electrolytes. Trainee interviews revealed that habit, bundled order sets, and fear of “missing something” contributed to inappropriate routine blood test ordering. Based on these root causes, we used the Model for Improvement to iteratively develop a multimodal intervention that began in July 2016.7,8 This included a change to the computerized provider order entry (CPOE) to nudge clinicians to a restrictive ordering strategy by substituting the “Daily x3” frequency of blood test ordering with a “Daily x1” option on a pick list of order options. Clinicians could still order daily routine blood tests for any specified duration, but would have to do so by manually changing the default setting within the CPOE.

From July 2017 to March 2018, the research team educated residents on appropriate laboratory test ordering and provided audit and feedback data to the clinicians. Diagnostic uncertainty was addressed in teaching sessions. Attending physicians were surveyed on appropriate indications for daily laboratory testing for each of CBC, electrolytes, and creatinine. Appropriate indications (Figure 1) were displayed in visible clinical areas and incorporated into teaching sessions.9

Educational tool displaying appropriate indications for routine daily laboratory testing based on consensus

Clinician teams received real-time performance data on their routine blood test ordering patterns compared with an institutional benchmark. Bar graphs of blood work ordering rates (sum of CBCs, creatinine, and electrolytes ordered for all patients on a given team divided by the total LOS for all patients) were distributed to each internal medicine team via email every 2 weeks (Figure 2).1,10-12
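
As a minimal sketch, the feedback metric described above reduces to a simple calculation (the patient record fields below are hypothetical illustrations, not the study's data model):

```python
def team_ordering_rate(patients):
    """Blood work ordering rate for one team: total routine tests (CBC +
    creatinine + electrolytes) across the team's patients divided by the
    total length of stay, in days, of those patients."""
    total_tests = sum(p["cbc"] + p["creatinine"] + p["electrolytes"] for p in patients)
    total_los_days = sum(p["los_days"] for p in patients)
    return total_tests / total_los_days

# Hypothetical team census for illustration
patients = [
    {"cbc": 3, "creatinine": 2, "electrolytes": 3, "los_days": 4},
    {"cbc": 1, "creatinine": 1, "electrolytes": 1, "los_days": 2},
]
rate = team_ordering_rate(patients)  # 11 tests over 6 inpatient days
```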

Sample of biweekly data distributed to each general internal medicine (GIM) team to illustrate blood work ordering patterns relative to average of all teams

Data Collection and Analysis

Data were extracted from the hospital electronic health record (EHR). The primary outcome was the number of routine blood tests (CBC, creatinine, and electrolytes) ordered per inpatient day. Descriptive statistics were calculated for demographic variables. We used statistical process control (SPC) charts to compare the baseline period (April 2016-June 2017) and the intervention period (July 2017-March 2018) for the primary outcome. SPC charts display process changes over time. Data are plotted in chronological order, with the central line representing the outcome mean, an upper line representing the upper control limit, and a lower line representing the lower control limit. The upper and lower limits were set at 3σ, corresponding to 3 standard deviations above and below the mean. Six successive points above or below the mean suggest “special cause variation,” indicating that observed results are unlikely to be due to secular trends. SPC charts are commonly used quality tools for both process improvement and research.13-16 These charts were created using QI Macros SPC software for Excel V. 2012.07 (KnowWare International, Denver, CO).
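
The control limits can be sketched as follows. This is a simplified illustration with hypothetical monthly rates; dedicated SPC software such as QI Macros typically estimates σ from the average moving range rather than the raw standard deviation:

```python
import statistics

def control_limits(samples):
    """Center line and 3-sigma control limits for a run of process samples."""
    center = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical monthly rates of routine tests per inpatient day
monthly_rates = [1.22, 1.18, 1.19, 1.21, 1.17, 1.20]
lcl, cl, ucl = control_limits(monthly_rates)
```

Points plotted outside `lcl`/`ucl`, or runs of successive points on one side of `cl`, flag special cause variation.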

The direct cost of each laboratory test (CBC = $7.54/test, electrolytes = $2.04/test, creatinine = $1.28/test, in Canadian dollars) was obtained from the hospital laboratory department. These per-test costs were summed, multiplied by the pre- to post-intervention difference in combined tests ordered per inpatient day, and multiplied by 365 to estimate cost savings per year.
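
The estimate can be sketched as below. Note that annualizing per-inpatient-day savings also requires an average daily census; the census value here is a hypothetical assumption for illustration, not a figure reported in the study:

```python
# Per-test direct costs from the hospital laboratory (Canadian dollars)
TEST_COSTS = {"cbc": 7.54, "electrolytes": 2.04, "creatinine": 1.28}

def estimated_annual_savings(tests_per_day_pre, tests_per_day_post, avg_daily_census):
    """Annualized savings from the reduction in combined routine tests
    ordered per inpatient day (avg_daily_census is an assumed input)."""
    cost_per_test_set = sum(TEST_COSTS.values())          # $10.86 per combined set
    tests_saved = tests_per_day_pre - tests_per_day_post  # per inpatient day
    return cost_per_test_set * tests_saved * avg_daily_census * 365

# Using the observed rates (1.19 -> 1.11 tests/inpatient day) and a
# hypothetical average daily census of 85
savings = estimated_annual_savings(1.19, 1.11, 85)
```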

Results

Over the study period, there were 6424 unique patient admissions on the general internal medicine service, with a median LOS of 3.5 days (Table).

Characteristics and Outcomes of Patients Discharged From General Internal Medicine Ward, April 2016 to March 2018

Most inpatient visits had at least 1 CBC (80%; mean, 3.6 tests/visit), creatinine (79.3%; mean, 3.5 tests/visit), or electrolytes test (81.6%; mean, 3.9 tests/visit) completed. In total, 56,767 laboratory tests were ordered.

Following the intervention, there was a reduction in both rates of routine blood test orders and their associated costs, with a shift below the mean. The mean number of tests ordered (combined CBC, creatinine, and electrolytes) per inpatient day decreased from 1.19 (SD, 0.21) in the pre-intervention period to 1.11 (SD, 0.05) in the post-intervention period (P < 0.0001), representing a 6.7% relative reduction (Figure 3). We observed a 6.2% relative reduction in costs per inpatient day, translating to a total savings of $26,851 over 1 year for the intervention period.

Routine blood work ordering rates pre- and post-intervention

Discussion

Our study suggests that a multimodal intervention, including CPOE restrictions, resident education with posters, and audit and feedback strategies, can reduce lab test ordering on general internal medicine wards. This finding is similar to those of previous studies using a similar intervention, although different laboratory tests were targeted.1,2,5,6,10,17

Our study found smaller reductions in test ordering than a previous study, which reported a relative reduction of 17% to 30%,18 and another investigation conducted recently in a similar setting.17 In the latter study, reductions in laboratory testing were found mostly in nonroutine tests, and no significant improvements were noted in CBC, electrolytes, and creatinine, the 3 tests we studied, over the same duration.17 This may represent a ceiling effect in reducing laboratory testing, and efforts to reduce CBC, electrolytes, and creatinine testing beyond 0.3 to 0.4 tests per inpatient day (or a combined 1.16 tests per inpatient day) may not be clinically appropriate or possible. This information can guide institutions to target other areas of overuse based on utilization rates in order to maximize the benefits of a resource-intensive intervention.

There are a number of limitations that merit discussion. First, observational studies do not demonstrate causation; however, to our knowledge, no other co-interventions were conducted during the study period. Notably, our intervention began in July, when new internal medicine residents begin their training. As resource stewardship becomes more important, medical schools are spending more time educating students about Choosing Wisely, and newer cohorts of residents may therefore be more cognizant of appropriate blood testing. Second, this was a single-center study, limiting generalizability; however, many other centers have reported similar findings. Third, we do not know whether any adverse clinical events were associated with overly restrictive blood work ordering, although informal tracking of STAT laboratory testing remained stable throughout the study period. It remains important to ensure that blood work is ordered in moderation and tailored to each patient using clinical judgment.

Future Directions

We observed modest reductions in the quantity and costs associated with a quality improvement intervention aimed at reducing routine blood testing. A baseline rate of laboratory testing of less than 1 test per inpatient day may require including other target tests to drive down absolute utilization.

Corresponding author: Christine Soong, MD, MSc, 433-600 University Avenue, Toronto, Ontario, Canada M5G 1X5; Christine.soong@utoronto.ca.

Financial disclosures: None.

References

1. Eaton KP, Levy K, Soong C, et al. Evidence-based guidelines to eliminate repetitive laboratory testing. JAMA Intern Med. 2017;178:431.

2. May TA, Clancy M, Critchfield J, et al. Reducing unnecessary inpatient laboratory testing in a teaching hospital. Am J Clin Pathol. 2006;126:200-206.

3. Thavendiranathan P, Bagai A, Ebidia A, et al. Do blood tests cause anemia in hospitalized patients? The effect of diagnostic phlebotomy on hemoglobin and hematocrit levels. J Gen Intern Med. 2005;20:520-524.

4. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173:903-908.

5. Attali M, Barel Y, Somin M, et al. A cost-effective method for reducing the volume of laboratory tests in a university-associated teaching hospital. Mt Sinai J Med. 2006;73:787-794.

6. Faisal A, Andres K, Rind JAK, et al. Reducing the number of unnecessary routine laboratory tests through education of internal medicine residents. Postgrad Med J. 2018;94:716-719.

7. How to Improve. Institute for Healthcare Improvement. 2009. http://www.ihi.org/resources/Pages/HowtoImprove/default.aspx. Accessed June 5, 2019.

8. Langley GL, Moen R, Nolan KM, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass Publishers; 2009.

9. Hicks L. Blood Draws Toolkit. Choosing Wisely Canada. 2017. https://choosingwiselycanada.org/wpcontent/uploads/2017/10/CWC_BloodDraws_Toolkit.pdf. Accessed March 5, 2019.

10. Sadowski BW, Lane AB, Wood SM, et al. High-value, cost-conscious care: iterative systems-based interventions to reduce unnecessary laboratory testing. Am J Med. 2017;130:1112e1-1112e7.

11. Minerowicz C, Abel N, Hunter K, et al. Impact of weekly feedback on test ordering patterns. Am J Manag Care. 2015;21:763-768.

12. Calderon-Margalit R, Mor-Yosef S, et al. An administrative intervention to improve the utilization of laboratory tests within a university hospital. Int J Qual Health Care. 2005;17:243-248.

13. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.

14. American Society for Quality. Control chart. ASQ website. https://asq.org/quality-resources/control-chart. Accessed November 5, 2020.

15. American Society for Quality. The 7 Basic Quality Tools for Process Improvement. ASQ website. https://asq.org/quality-resources/seven-basic-quality-tools. Accessed November 5, 2020.

16. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.

17. Ambasta A, Ma IWY, Woo S, et al. Impact of an education and multilevel social comparison-based intervention bundle on use of routine blood tests in hospitalised patients at an academic tertiary care hospital: a controlled pre-intervention post-intervention study. BMJ Qual Saf. 2020;29:1-2.

18. Lee VS, Kawamoto K, Hess R, et al. Implementation of a value-driven outcomes program to identify high variability in clinical costs and outcomes and association with reduced cost and improved quality. JAMA. 2016;316:1061-1072.

Issue
Journal of Clinical Outcomes Management - 27(6)
Page Number
261-264,269
Display Headline
Reducing Inappropriate Laboratory Testing in the Hospital Setting: How Low Can We Go?

Comparison of Resident, Advanced Practice Clinician, and Hospitalist Teams in an Academic Medical Center: Association With Clinical Outcomes and Resource Utilization

The Accreditation Council for Graduate Medical Education (ACGME) first mandated residency work hour restrictions in 2003.1 In 2011, revised work hour requirements were issued, further limiting the maximum duration of a shift and extending the duration of time off between scheduled shifts.2 Academic medical centers have been forced to adapt to work hour restrictions, and cuts in funding to research and educational missions have pressured institutions to restructure with a greater focus on high-quality, lower-cost care.3,4 In response, many academic hospitals have added hospitalist teams, or incorporated advanced practice clinicians (APCs) (nurse practitioners [NPs] and physician assistants [PAs]), to accommodate resident physician duty hour restrictions on their inpatient general medicine services.5,6 More recently, the COVID-19 pandemic has created unanticipated physician shortages, forcing medical centers to rapidly expand and broaden the scope of their existing APC workforce.7

Several comparisons of clinical outcomes, cost, and patient satisfaction between different combinations of hospitalist-based, resident-based, or APC-based inpatient teams have been reported with conflicting observations.6,8-14 Roy et al reported no significant differences in mortality, length of stay (LOS), or readmissions between PA and resident teams.6 Timmermans et al reported similar cost-effectiveness, LOS, and quality of care between PA and physician teams that included a hybrid of attending only and resident teams.13,14 Alternatively, Singh et al and Iannuzzi et al reported increased LOS among PA teams,10,12 whereas Chin et al observed an increased LOS and reduced 30-day readmissions among hospitalist teams.8 While these observed differences may be attributable to heterogeneous patient populations or institution-specific team structure, the exact reasons remain unknown. Furthermore, understanding the value of alternate staffing models is essential for medical centers to prepare for potential COVID-19 related physician shortages. To our knowledge, no study to date has directly compared outcomes between resident, APC, and hospitalist team structures within an academic medical center.

We believe our institution provides a unique environment to study the differences in inpatient general medicine team structure with respect to quality and efficiency of care delivery. The objective of our study was to directly compare clinical outcomes and resource utilization among three distinct team structures: APC, resident, and solo hospitalist. We hypothesized that clinical outcomes, cost, and utilization of consult services would be similar across all team structures and that hospitalist teams would discharge patients earlier than resident and APC teams.

METHODS

Study Design and Setting

We conducted a retrospective observational cohort study at the University of Utah Medical Center, a 548-bed academic medical center in Salt Lake City. An electronic database query was used to identify all patients discharged from the inpatient general internal medicine service between July 1, 2015, and July 1, 2018. Baseline patient characteristics were collected including age, gender, and Charlson comorbidity index (CCI).15 Case-mix index was determined for admissions where a Medicare Severity Diagnosis Related Group (MS-DRG) and corresponding weight was assigned.16,17 Source of admission was collected to identify patients transferred from an outside hospital, typically due to increased medical complexity or need for specialty care not available at the referring center. Time of admission was collected to classify whether a patient was admitted during the day or at night. Length of stay was calculated as the difference between discharge date/time and admission date/time. Discharge order time was collected as a measure of clinician efficiency. The number of consults per admission was determined by the number of different medical or surgical subspecialty services that wrote at least one consultation or progress note after the time of admission and were not the primary service at the time the note was written. The project was reviewed and deemed exempt by the University of Utah Institutional Review Board (IRB 00104884).
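
Two of the derived measures above can be sketched directly; the record structures and field names are hypothetical illustrations, not the actual EHR schema:

```python
from datetime import datetime

def los_days(admit, discharge):
    """Length of stay as the fractional-day difference between
    discharge and admission date/times."""
    return (discharge - admit).total_seconds() / 86400

def consults_per_admission(notes):
    """Number of distinct subspecialty services that wrote at least one
    consultation or progress note while not the primary service."""
    return len({n["service"] for n in notes if not n["was_primary"]})

# Hypothetical admission
admit = datetime(2016, 3, 1, 14, 30)
discharge = datetime(2016, 3, 4, 11, 0)
stay = los_days(admit, discharge)  # ~2.85 days

notes = [
    {"service": "Cardiology", "was_primary": False},
    {"service": "Cardiology", "was_primary": False},
    {"service": "Nephrology", "was_primary": False},
    {"service": "Internal Medicine", "was_primary": True},
]
n_consults = consults_per_admission(notes)  # 2 distinct consulting services
```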

Inpatient Care Team Structure

Patients were assigned to one of three cohorts dependent on the assigned treatment team at the time of discharge. The three inpatient team structures were as follows: (1) a “resident team” composed of a senior resident (postgraduate year [PGY] 2 or PGY3) and one to two medical students or one senior resident, two interns (PGY1), and one to two medical students supervised by a hospitalist physician; (2) an “APC team” composed of one to two APCs supervised by a hospitalist physician; and (3) a “hospitalist team” composed of one attending hospitalist independently managing all patients.

Advanced Practice Clinicians

The APC service included 10 APCs (8 PAs and 2 NPs), with a combined workforce of nine APC full-time equivalents during the study period. Their experience ranged from new graduate to 11 years of clinical experience, with an average of 4.2 years. Among the 6 APCs with prior clinical experience, the majority (86%) of their years of clinical experience were within inpatient medicine, oncology, or cardiology. Recognizing the variability in clinical experience, we employed a rigorous onboarding program that entailed an average of 80 hours of didactic sessions including 1:1 teaching of the inpatient Society of Hospital Medicine core lecture series combined with initial intense clinical oversight.18 This program ranged from 2 weeks to 6 weeks depending on the individual APC’s clinical experience, progress, and comfort working independently. This onboarding program has subsequently been formalized into a 1-year APC fellowship that began after the study period concluded.

The degree of autonomy for each APC was individualized based on their clinical experience and ability to recognize limitations such as medical decision-making, clinical knowledge, and effective use of interprofessional team members (eg, peers, nursing, ancillary staff, consultants, and support personnel). Those APCs who demonstrated a sufficient level of clinical competence functioned with a high level of autonomy. During the day, APCs were expected to be the first point of contact for interprofessional team members, to respond to acute clinical changes in a patient’s condition, and to discuss active issues with the supervising attending, all with the majority of medical decision-making, direct patient communication, documentation, and care coordination performed by the APC. An experienced subset of the APC service was responsible for overnight coverage. Nocturnist APCs independently managed all cross-cover issues on patients assigned to APC and hospitalist teams and performed admissions with very little to no direct supervision of the overnight attending physician.

Patient Admission and Redistribution Process

During the study period, resident teams performed all daytime admissions (6 am to 6 pm) on a rotating basis. On any given day, three of four resident teams performed daytime admissions with the fourth team designated as “golden” and free from admitting duties. Patients admitted during the day remained assigned to the resident team for continuity. The APC and hospitalist teams did not accept new admissions during the day. Nighttime admissions (6 pm to 6 am) were performed by a separate team composed of two senior residents, two interns, one APC, occasional APC and medical students, and one supervising attending hospitalist. This team functioned as a single unit. Nighttime admissions were performed in a sequential and rotating fashion (eg, Intern A > Intern B > Resident A > Resident B > APC > student(s) > Intern A > Intern B, etc). Patients admitted overnight were randomly redistributed the following morning, with the majority reassigned to an APC team or hospitalist team in order to offset the workload of the resident teams performing daytime admissions. Following redistribution, a patient would remain assigned to the daytime APC or hospitalist team for the duration of their hospitalization. The redistribution decisions were based on individual team census, without systematic consideration of an individual patient’s diagnosis, medical complexity, socioeconomic status, or perceived quality of learning potential (eg, good teaching case).
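
The sequential rotating admission order described above amounts to a simple round-robin, which can be sketched with `itertools.cycle`:

```python
from itertools import cycle

# Night-team admitting order as described in the text
order = ["Intern A", "Intern B", "Resident A", "Resident B", "APC", "Student(s)"]
admitter = cycle(order)

# After the sixth admission, assignment wraps back to the start of the cycle
first_eight = [next(admitter) for _ in range(8)]
```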

Study Outcomes

We divided study outcomes into two categories, clinical outcomes and resource utilization. Clinical outcomes included LOS, unplanned readmission within 30 days, and inpatient mortality, and were designed to measure patient-related outcomes as a reflection of the quality of care delivered by different team structures. Resource utilization included discharge order time, discharge time, consults per admission, and total direct cost, which were designed to measure provider-related differences in efficiency and cost of care.

Statistical Analysis

Baseline characteristics and unadjusted outcomes are reported as frequency and percent, normally distributed variables as mean with SD, and nonnormally distributed variables as median with interquartile range (IQR). Baseline characteristics and unadjusted outcomes were compared using the chi-square test or the t test, where appropriate. Multivariable regression analysis using generalized linear models with a log link function and gamma distribution was used for continuous outcomes. Multivariable logistic regression was used for binary outcomes.10 Covariates included in regression models were age, gender, CCI, transfer from an outside hospital, and nighttime admission. In a sensitivity analysis, we included MS-DRG weight as a covariate for 85% of hospitalizations in our cohort exclusive of observation stays, and our findings were qualitatively similar (data not reported but available on request). Adjusted continuous outcomes were estimated using marginal effects at the means.19 Due to the sensitivity of cost data and an institutional policy against disclosing cost figures, total direct costs were normalized using the unadjusted median and adjusted mean total direct cost of an admission to an APC team as the normalizing value. A P value cutoff of .05 was used to determine statistical significance. Stata/IC version 16.1 (StataCorp) was used for all analyses.

RESULTS

Study Population

A total of 12,716 hospital admissions were identified during the study period. Of these, 7,943 (62.5%) admissions were assigned to a resident team, 3,519 (27.7%) admissions were assigned to an APC team, and the remaining 1,254 (9.9%) were assigned to a hospitalist team. Baseline patient characteristics are reported in Table 1. Patients admitted to resident teams (mean age [SD], 56.9 [19.1] years) were younger than those admitted to an APC team (58.0 [19.3] years; P = .004) or a hospitalist team (58.2 [19.3] years; P = .026). The case-mix index (mean MS-DRG weight [SD], 1.44 [0.87]) was slightly lower for resident teams than that for APC teams (1.49 [0.90]; P = .025). Resident teams had a significantly lower proportion of night admissions than did APC teams (32.0% vs 49.5%; P < .001) and hospitalist teams (48.6%; P < .001). APC teams were assigned more patients transferred from an outside hospital (19.1%), compared with resident teams (15.0%; P < .001) and hospitalist teams (16.0%; P = .015). No other significant differences were observed in baseline characteristics between cohorts.

Baseline Patient Characteristics

Clinical Outcomes

Unadjusted analysis demonstrated the LOS was similar among resident, APC, and hospitalist teams with a median (IQR) LOS of 2.90 (1.86, 4.26) days, 2.93 (1.89, 4.66) days, and 2.86 (1.84, 4.67) days, respectively. No significant differences were observed in unadjusted 30-day readmissions or inpatient mortality among the team structures (Table 2). Following multivariable adjustment for differences in baseline characteristics, no significant differences were observed in LOS, 30-day readmission, or inpatient mortality among teams (Table 3).

Comparison of Unadjusted Clinical Outcomes and Resource Utilization Among Resident, APC, and Hospitalist Teams

Resource Utilization

In unadjusted comparisons, hospitalist teams were observed to place discharge orders more than 30 minutes earlier than APC teams (median hours after midnight [IQR], 11.20 [9.63, 13.60] vs 11.73 [10.00, 13.87]; P < .001) and 54 minutes earlier than resident teams (12.10 [10.38, 13.90]; P < .001) (Table 2). Consistent with the earlier placement of discharge orders, hospitalist patients were also discharged from the hospital 26 and 32 minutes earlier than APC and resident patients, respectively. APC teams also discharged patients slightly earlier (6 minutes) than resident teams (median hours after midnight [IQR], 14.97 [13.23, 16.72] vs 15.07 [13.42, 16.73]; P = .045). Median consultation use among teams was similar, although statistically significant differences were present. Normalized total direct cost was 8% higher (P < .001) for admissions to APC teams than that for resident teams and 7% higher (P = .008) than that for hospitalist teams in unadjusted analysis (Table 2).

Following multivariable adjustment, the mean differences in discharge order time and discharge time remained significant, with hospitalist teams discharging patients an average of 20 to 30 minutes earlier than APC and resident teams (Table 3). Consultant utilization remained significantly different among teams, with APC teams utilizing consultants on average 15% more than hospitalist teams (P < .001) and 7% more than resident teams (P = .001). The differences in total direct costs were not significant after adjusted analysis.

Comparison of Adjusted Clinical Outcomes and Resource Utilization Among Resident, APC, and Hospitalist Teams

DISCUSSION

Many academic medical centers have expanded their workforce with APC or nonteaching hospitalist teams to accommodate the increasing volume of hospital admissions, resident work hour restrictions,1,2 and medical complexity of an aging population. Several hospitals have reported comparative outcomes between different care delivery models, with conflicting results.6,8,10-12 In our study, we directly evaluated three inpatient care delivery models and found that hospitalist teams discharged patients more efficiently and utilized fewer consultants, compared with APC and resident teams. In spite of this improved efficiency, no significant differences were observed in cost or other clinical outcomes.

Our findings further strengthen the evidence supporting the use of APCs on inpatient general medicine services and should be of particular interest to academic centers working to expand staffing to offset growth in patient volume and reductions in the resident workforce. We believe several findings from our study warrant further discussion.

First, although hospitalist teams discharged patients more efficiently, this observation may reflect differences in workflow rather than true disparities in efficiency between provider types (ie, APC vs hospitalist vs resident physician). As with most academic centers, patients assigned to resident teams are presented by house staff to an attending physician who is ultimately responsible for patient care decisions. Therefore, it is conceivable that delays in the discharge process are in part related to the convention of bedside rounding and discussing the care plan prior to discharge.20 In fact, we recognized this as a bottleneck and changed our discharge process for resident teams in June 2017, with a measurable improvement in discharge times. In the absence of this intervention, our observed differences in discharge times among teams may have been even greater.

Second, no significant differences in clinical outcomes were observed in our adjusted analyses, which suggests that a similar quality of care is delivered to patients regardless of team structure, an important observation when considering different staffing models.

Third, we observed a significant increase in consultation use among resident and APC teams, compared with hospitalists. While we are not able to precisely identify the basis for this variation, we believe it could reflect differences in clinical experience, comfort with diagnostic uncertainty, or the unequal distribution of patients transferred from outside hospitals for tertiary care. Interestingly, the greater consultation use did not correlate with higher healthcare costs, a finding recently reported by Stevens et al.21

Fourth, we believe the lack of differences in cost and clinical outcomes among team structures may be of particular interest to academic centers when considering physician burnout, salaries, and clinical education. Clerical burden, such as completing clinical documentation and computerized physician order entry, has been implicated as a risk factor for physician burnout.22 Incorporating APCs into roles similar to those performed by resident physicians may reduce the clerical burden on hospitalists, thereby reducing the risk of physician burnout. The addition of APCs may also represent an opportunity for cost savings for healthcare centers when comparing the median salary of an APC with that of an internal medicine hospitalist.23,24 Moreover, academic hospitalists have been shown to be excellent medical educators and report increased job satisfaction with a variety of duties beyond direct patient care.24,25 Unforeseen benefits of adding APC teams within our institution have been the added teaching opportunities for APCs and APC students, increased collegiality with the APCs, and the creation of an APC fellowship program with a focus on inpatient medicine. Similar postgraduate training programs have been reported and serve as effective models to train APCs for hospital-based practice.26

Lastly, although this project was conceived and completed prior to the COVID-19 pandemic, our observations may be informative for medical centers experiencing a workforce shortage caused by a surge of COVID-19 patients. During a physician shortage we believe our APC team model could be rapidly expanded to accommodate a large influx of patients. This expansion could be accomplished through a single attending physician overseeing multiple APC teams. In this model, the supervising physician would only evaluate the most complex patients with most patients being managed solely by an APC from admission to discharge. Such changes may require temporary suspension of state laws restricting APC independent practice.27,28

Our findings contrast with those of previous reports in that we did not observe significant differences in clinical outcomes (ie, LOS, inpatient mortality, and 30-day readmissions) or total direct cost.8,10,21 Other institutions have noted an increased LOS among APC teams and hospitalist teams, compared with resident teams.8,10 Furthermore, Chin et al and Iannuzzi et al reported reductions in healthcare cost for resident teams,8,10 whereas our study did not identify significant cost differences among team structures. Although we cannot pinpoint the exact reason(s) for these dissimilarities, it is plausible that unmeasured factors such as institutional differences in APC training, direct physician supervision, admission processes, or inpatient team census may play a role.

Several study limitations should be recognized. First, the retrospective, nonrandomized design is the principal limitation of our study. Administrative data were obtained via an electronic query of our data warehouse, and although we aimed to identify as many patient characteristics as possible to adjust for confounding effects, undetected differences among cohorts may exist. Second, our inpatient admission process may have placed undue burden on resident teams to perform all daytime admissions, inadvertently affecting study outcomes. It is possible the observed benefits of a solo hospitalist team are attributable to the lack of admitting duties rather than inherent advantages of the team structure. If this were the case, we would expect similar benefits among APC teams, which we did not note. Third, the study was performed at a single academic center, which may limit the generalizability of our results. Fourth, it is possible the outcomes are similar among teams because our hospitalist faculty rotate proportionately between the different teams. Lastly, the study was underpowered to detect a significant difference in mortality between hospitalist and APC teams. A post hoc power calculation based on our observed sample and effect sizes estimated 75% power to detect a mortality difference between hospitalists and APCs; other mortality comparisons were adequately powered.
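A post hoc power calculation of this kind can be approximated with the standard normal-approximation formula for a two-sample test of proportions. The sketch below is illustrative only: the event rates are invented (the study's actual mortality rates and effect sizes are not restated here), although the group sizes match the APC and hospitalist admission counts.

```python
import math

def power_two_proportions(p1, p2, n1, n2, z_alpha=1.96):
    """Approximate power of a two-sided two-sample test of proportions
    (normal approximation): Phi(|p1 - p2| / SE - z_alpha),
    with z_alpha = 1.96 for a two-sided alpha of .05."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = abs(p1 - p2) / se - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Hypothetical mortality rates of 2% vs 1% with the study's APC and
# hospitalist group sizes (3,519 and 1,254 admissions).
print(round(power_two_proportions(0.02, 0.01, 3519, 1254), 2))  # → 0.78
```

Because power grows with sample size, the same effect would be detectable with near-certainty in a larger cohort, which is why only the smallest between-group comparison was underpowered.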

CONCLUSION

We observed similar total direct costs, LOS, 30-day readmissions, and inpatient mortality among hospitalist, APC, and resident teams. APC and resident teams utilized more consultants and discharged patients later than hospitalist teams. Our analysis suggests that clinical outcomes are not significantly affected by inpatient team structure and that the addition of general medicine inpatient APC or hospitalist teams represents a safe and efficient alternative to traditional resident teams within an academic medical center.

Disclosures

All authors declare they have no conflicts of interest.

References

1. Report of the Work Group on Resident Duty Hours and the Learning Environment, June 11, 2002. Accreditation Council for Graduate Medical Education; 2003.
2. ACGME Task Force on Quality Care and Professionalism. Philibert I, Amis S, eds. The ACGME 2011 Duty Hour Standards: Enhancing Quality of Care, Supervision, and Resident Professional Development. Accreditation Council for Graduate Medical Education; 2011. https://www.acgme.org/Portals/0/PDFs/jgme-monograph[1].pdf
3. Konstam MA, Hill JA, Kovacs RJ, et al. The academic medical system: reinvention to survive the revolution in health care. J Am Coll Cardiol. 2017;69(10):1305-1312. https://doi.org/10.1016/j.jacc.2016.12.024
4. The future of the academic medical center: strategies to avoid a margin meltdown. Health Research Institute. February 2012. https://uofuhealth.utah.edu/hcr/2012/resources/the-future-of-academic-medical-centers.pdf
5. Moote M, Krsek C, Kleinpell R, Todd B. Physician assistant and nurse practitioner utilization in academic medical centers. Am J Med Qual. 2019;34(5):465-472. https://doi.org/10.1177/1062860619873216
6. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368. https://doi.org/10.1002/jhm.352
7. Denne E. Behind the scenes at Northwell Health as PAs respond to COVID-19. American Academy of Physician Assistants. May 11, 2020. Accessed May 15, 2020. https://www.aapa.org/news-central/2020/05/behind-the-scenes-at-northwell-heath-as-pas-respond-to-covid-19/
8. Chin DL, Wilson MH, Bang H, Romano PS. Comparing patient outcomes of academician-preceptors, hospitalist-preceptors, and hospitalists on internal medicine services in an academic medical center. J Gen Intern Med. 2014;29(12):1672-1678. https://doi.org/10.1007/s11606-014-2982-y
9. Cowan MJ, Shapiro M, Hays RD, et al. The effect of a multidisciplinary hospitalist/physician and advanced practice nurse collaboration on hospital costs. J Nurs Adm. 2006;36(2):79-85. https://doi.org/10.1097/00005110-200602000-00006
10. Iannuzzi MC, Iannuzzi JC, Holtsbery A, Wright SM, Knohl SJ. Comparing hospitalist-resident to hospitalist-midlevel practitioner team performance on length of stay and direct patient care cost. J Grad Med Educ. 2015;7(1):65-69. https://doi.org/10.4300/jgme-d-14-00234.1
11. Kapu AN, Kleinpell R, Pilon B. Quality and financial impact of adding nurse practitioners to inpatient care teams. J Nurs Adm. 2014;44(2):87-96. https://doi.org/10.1097/nna.0000000000000031
12. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. https://doi.org/10.1002/jhm.826
13. Timmermans MJC, van Vught A, Peters YAS, et al. The impact of the implementation of physician assistants in inpatient care: a multicenter matched-controlled study. PLoS One. 2017;12(8):e0178212. https://doi.org/10.1371/journal.pone.0178212
14. Timmermans MJC, van den Brink GT, van Vught A, et al. The involvement of physician assistants in inpatient care in hospitals in the Netherlands: a cost-effectiveness analysis. BMJ Open. 2017;7(7):e016405. https://doi.org/10.1136/bmjopen-2017-016405
15. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. https://doi.org/10.1016/0021-9681(87)90171-8
16. MS-DRG Classifications and Software. Centers for Medicare & Medicaid Services. 2020. Updated April 28, 2020. Accessed May 5, 2020. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/MS-DRG-Classifications-and-Software
17. Fetter RB, Shin Y, Freeman JL, Averill RF, Thompson JD. Case mix definition by diagnosis-related groups. Med Care. 1980;18(2 Suppl):iii, 1-53.
18. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine--2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287. https://doi.org/10.12788/jhm.2715
19. Williams R. Using the margins command to estimate and interpret adjusted predictions and marginal effects. Stata J. 2012;12(2):308-331. https://doi.org/10.1177/1536867X1201200209
20. Goolsarran N, Olowo G, Ling Y, Abbasi S, Taub E, Teressa G. Outcomes of a resident-led early hospital discharge intervention. J Gen Intern Med. 2020;35(2):437-443. https://doi.org/10.1007/s11606-019-05563-w
21. Stevens JP, Hatfield LA, Nyweide DJ, Landon B. Association of variation in consultant use among hospitalist physicians with outcomes among Medicare beneficiaries. JAMA Netw Open. 2020;3(2):e1921750. https://doi.org/10.1001/jamanetworkopen.2019.21750
22. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836-848. https://doi.org/10.1016/j.mayocp.2016.05.007
23. 2019 AAPA Salary Report. American Academy of PAs. 2019. https://www.aapa.org/shop/salary-report-2019/
24. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB; Society of Hospital Medicine Career Satisfaction Task Force. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907
25. Dalen JE, Ryan KJ, Waterbrook AL, Alpert JS. Hospitalists, medical education, and US health care costs. Am J Med. 2018;131(11):1267-1269. https://doi.org/10.1016/j.amjmed.2018.05.016
26. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. https://doi.org/10.1002/jhm.619
27. Utah Physician Assistant Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter70A/C58-70a_2019051420190514.pdf
28. Nurse Practice Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter31B/C58-31b_1800010118000101.pdf

Journal of Hospital Medicine. 2020;15(12):709-715. Published online first November 18, 2020.

The Accreditation Council for Graduate Medical Education (ACGME) first mandated residency work hour restrictions in 2003.1 In 2011, revised work hour requirements were issued, further limiting the maximum duration of a shift and extending the duration of time off between scheduled shifts.2 Academic medical centers have been forced to adapt to work hour restrictions, and cuts in funding to research and educational missions have pressured institutions to restructure with a greater focus on high-quality, lower-cost care.3,4 In response, many academic hospitals have added hospitalist teams, or incorporated advanced practice clinicians (APCs) (nurse practitioners [NPs] and physician assistants [PAs]) to accommodate resident physician duty hour restrictions on their inpatient general medicine services.5,6 More recently, the COVID-19 pandemic has created unanticipated physician shortages forcing medical centers to rapidly expand and broaden the scope of their existing APC workforce.7

Several comparisons of clinical outcomes, cost, and patient satisfaction between different combinations of hospitalist-based, resident-based, or APC-based inpatient teams have been reported with conflicting observations.6,8-14 Roy et al reported no significant differences in mortality, length of stay (LOS), or readmissions between PA and resident teams.6 Timmermans et al reported similar cost-effectiveness, LOS, and quality of care between PA and physician teams that included a hybrid of attending-only and resident teams.13,14 In contrast, Singh et al and Iannuzzi et al reported increased LOS among PA teams,10,12 whereas Chin et al observed an increased LOS and reduced 30-day readmissions among hospitalist teams.8 While these observed differences may be attributable to heterogeneous patient populations or institution-specific team structure, the exact reasons remain unknown. Furthermore, understanding the value of alternate staffing models is essential for medical centers to prepare for potential COVID-19 related physician shortages. To our knowledge, no study to date has directly compared outcomes between resident, APC, and hospitalist team structures within an academic medical center.

We believe our institution provides a unique environment to study the differences in inpatient general medicine team structure with respect to quality and efficiency of care delivery. The objective of our study is to directly compare clinical outcomes and resource utilization among three distinct team structures: APC, resident, and solo hospitalist. We hypothesize that clinical outcomes, cost, and utilization of consult services will be similar across all team structures and that hospitalist teams will discharge patients earlier than resident and APC teams.

METHODS

Study Design and Setting

We conducted a retrospective observational cohort study at the University of Utah Medical Center, a 548-bed academic medical center in Salt Lake City. An electronic database query was used to identify all patients discharged from the inpatient general internal medicine service between July 1, 2015, and July 1, 2018. Baseline patient characteristics were collected including age, gender, and Charlson comorbidity index (CCI).15 Case-mix index was determined for admissions where a Medicare Severity Diagnosis Related Group (MS-DRG) and corresponding weight was assigned.16,17 Source of admission was collected to identify patients transferred from an outside hospital, typically due to increased medical complexity or need for specialty care not available at the referring center. Time of admission was collected to classify whether a patient was admitted during the day or at night. Length of stay was calculated as the difference between discharge date/time and admission date/time. Discharge order time was collected as a measure of clinician efficiency. The number of consults per admission was determined by the number of different medical or surgical subspecialty services that wrote at least one consultation or progress note after the time of admission and were not the primary service at the time the note was written. The project was reviewed and deemed exempt by the University of Utah Institutional Review Board (IRB 00104884).
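Two of the variables above reduce to simple timestamp arithmetic. The following is a minimal sketch of how they could be derived, not the study's actual database query; the function names, timestamps, and the 6 PM/6 AM cutoffs applied here are taken from the description in the text, with everything else illustrative.

```python
from datetime import datetime

def length_of_stay_days(admit: datetime, discharge: datetime) -> float:
    """LOS = discharge date/time minus admission date/time, in days."""
    return (discharge - admit).total_seconds() / 86400.0

def is_night_admission(admit: datetime) -> bool:
    """Nighttime admissions ran from 6 PM to 6 AM."""
    return admit.hour >= 18 or admit.hour < 6

# Illustrative admission: 7:30 PM arrival, discharged ~3 days later.
admit = datetime(2016, 3, 1, 19, 30)
discharge = datetime(2016, 3, 4, 13, 30)
print(length_of_stay_days(admit, discharge))  # → 2.75
print(is_night_admission(admit))              # → True
```

Computing LOS as a fractional-day difference (rather than counting calendar dates) is what makes the median LOS values such as 2.90 days in the Results possible.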

Inpatient Care Team Structure

Patients were assigned to one of three cohorts dependent on the assigned treatment team at the time of discharge. The three inpatient team structures were as follows: (1) a “resident team” composed of a senior resident (postgraduate year [PGY] 2 or PGY3) and one to two medical students or one senior resident, two interns (PGY1), and one to two medical students supervised by a hospitalist physician; (2) an “APC team” composed of one to two APCs supervised by a hospitalist physician; and (3) a “hospitalist team” composed of one attending hospitalist independently managing all patients.

Advanced Practice Clinicians

The APC service included 10 APCs (8 PAs and 2 NPs), with a combined workforce of nine APC full-time equivalents during the study period. Their experience ranged from new graduate to 11 years of clinical experience, with an average of 4.2 years. Among the 6 APCs with prior clinical experience, the majority (86%) of their years of clinical experience were within inpatient medicine, oncology, or cardiology. Recognizing the variability in clinical experience, we employed a rigorous onboarding program that entailed an average of 80 hours of didactic sessions including 1:1 teaching of the inpatient Society of Hospital Medicine core lecture series combined with initial intense clinical oversight.18 This program ranged from 2 weeks to 6 weeks depending on the individual APC’s clinical experience, progress, and comfort working independently. This onboarding program has subsequently been formalized into a 1-year APC fellowship that began after the study period concluded.

The degree of autonomy for each APC was individualized based on their clinical experience and ability to recognize limitations such as medical decision-making, clinical knowledge, and effective use of interprofessional team members (eg, peers, nursing, ancillary staff, consultants, and support personnel). Those APCs who demonstrated a sufficient level of clinical competence functioned with a high level of autonomy. During the day, APCs were expected to be the first point of contact for interprofessional team members, to respond to acute clinical changes in a patient’s condition, and to discuss active issues with the supervising attending, all with the majority of medical decision-making, direct patient communication, documentation, and care coordination performed by the APC. An experienced subset of the APC service was responsible for overnight coverage. Nocturnist APCs independently managed all cross-cover issues on patients assigned to APC and hospitalist teams and performed admissions with little to no direct supervision from the overnight attending physician.

Patient Admission and Redistribution Process

During the study period, resident teams performed all daytime admissions (6 am to 6 pm) on a rotating basis. On any given day, three of four resident teams performed daytime admissions with the fourth team designated as “golden” and free from admitting duties. Patients admitted during the day remained assigned to the resident team for continuity. The APC and hospitalist teams did not accept new admissions during the day. Nighttime admissions (6 pm to 6 am) were performed by a separate team composed of two senior residents, two interns, one APC, occasional APC and medical students, and one supervising attending hospitalist. This team functioned as a single unit. Nighttime admissions were performed in a sequential and rotating fashion (eg, Intern A > Intern B > Resident A > Resident B > APC > student(s) > Intern A > Intern B, etc). Patients admitted overnight were randomly redistributed the following morning, with the majority reassigned to an APC team or hospitalist team in order to offset the workload of the resident teams performing daytime admissions. Following redistribution, a patient would remain assigned to the daytime APC or hospitalist team for the duration of their hospitalization. The redistribution decisions were based on individual team census, without systematic consideration of an individual patient’s diagnosis, medical complexity, socioeconomic status, or perceived quality of learning potential (eg, good teaching case).
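The sequential rotation and census-based redistribution described above can be sketched as follows. This is a toy illustration, not the service's actual scheduling logic: the rotation order comes from the text (students omitted for brevity), while the lowest-census assignment rule and the census numbers are assumptions.

```python
from itertools import cycle

# Rotating nighttime admission order: Intern A > Intern B > Resident A >
# Resident B > APC, then back to the start.
rotation = cycle(["Intern A", "Intern B", "Resident A", "Resident B", "APC"])
night_assignments = [next(rotation) for _ in range(7)]
print(night_assignments[5])  # → Intern A (the cycle wraps after the APC)

# Morning redistribution: send each overnight patient to the daytime team
# with the lowest census (hypothetical census counts).
census = {"APC team": 10, "Hospitalist team": 12, "Resident team A": 14}
receiving_team = min(census, key=census.get)
print(receiving_team)  # → APC team
```

Note that a strict rotation balances admitting workload overnight, while the morning redistribution balances team census, which is why most overnight patients ended up on APC or hospitalist teams.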

Study Outcomes

We divided study outcomes into two categories, clinical outcomes and resource utilization. Clinical outcomes included LOS, unplanned readmission within 30-days, and inpatient mortality and were designed to measure patient-related outcomes as a reflection of the quality of care delivered by different team structures. Resource utilization included discharge order time, discharge time, consults per admission, and total direct cost, which were designed to measure provider-related differences in efficiency and cost of care.

Statistical Analysis

Baseline characteristics and unadjusted outcomes are reported as frequency and percent, normally distributed variables as mean with SD, and nonnormally distributed variables as median with interquartile range (IQR). Baseline characteristics and unadjusted outcomes were compared using the chi-square test or the t test, where appropriate. Multivariable regression analysis using generalized linear models with a log link function and gamma distribution was used for continuous outcomes. Multivariable logistic regression was used for binary outcomes.10 Covariates included in regression models were age, gender, CCI, transfer from an outside hospital, and nighttime admission. In a sensitivity analysis, we included MS-DRG weight as a covariate for 85% of hospitalizations in our cohort exclusive of observation stays, and our findings were qualitatively similar (data not reported but available on request). Adjusted continuous outcomes were estimated using marginal effects at the means.19 Due to the sensitivity of cost data and an institutional policy against disclosing cost figures, total direct costs were normalized using the unadjusted median and adjusted mean total direct cost of an admission to an APC team as the normalizing value. A P value cutoff of .05 was used to determine statistical significance. Stata/IC version 16.1 (StataCorp) was used for all analyses.
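The cost-normalization step amounts to dividing each team's cost figure by the corresponding APC-team value, so the APC team reports 1.00 by construction. A minimal sketch, using invented dollar amounts since the actual cost data cannot be disclosed:

```python
from statistics import median

# Hypothetical total direct costs per admission; the APC-team median
# serves as the normalizing value.
apc_costs      = [4200, 5100, 6800, 9000]
resident_costs = [3900, 4800, 6300, 8200]

normalizer = median(apc_costs)  # 5950.0 with these made-up figures
print(round(median(apc_costs) / normalizer, 2))       # → 1.0
print(round(median(resident_costs) / normalizer, 2))  # → 0.93
```

Normalized values preserve the relative differences among teams (eg, the 8% unadjusted cost difference reported in the Results) without revealing absolute costs.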


CONCLUSION

We observed similar total direct costs, LOS, 30-day readmission, and inpatient mortality between hospitalist, APC, and resident teams. APC and resident teams utilized more consultants and discharged patient later than hospitalists. Our analysis suggests clinical outcomes are not significantly affected by inpatient team structure, and the addition of general medicine inpatient APC or hospitalist teams represent safe and efficient alternatives to traditional resident teams within an academic medical center.

Disclosures

All authors declare they have no conflicts of interest.

The Accreditation Council for Graduate Medical Education (ACGME) first mandated residency work hour restrictions in 2003.1 In 2011, revised work hour requirements were issued, further limiting the maximum duration of a shift and extending the duration of time off between scheduled shifts.2 Academic medical centers have been forced to adapt to work hour restrictions, and cuts in funding to research and educational missions have pressured institutions to restructure with a greater focus on high-quality, lower-cost care.3,4 In response, many academic hospitals have added hospitalist teams or incorporated advanced practice clinicians (APCs) (nurse practitioners [NPs] and physician assistants [PAs]) to accommodate resident physician duty hour restrictions on their inpatient general medicine services.5,6 More recently, the COVID-19 pandemic has created unanticipated physician shortages, forcing medical centers to rapidly expand and broaden the scope of their existing APC workforce.7

Several comparisons of clinical outcomes, cost, and patient satisfaction between different combinations of hospitalist-based, resident-based, or APC-based inpatient teams have been reported with conflicting observations.6,8-14 Roy et al reported no significant differences in mortality, length of stay (LOS), or readmissions between PA and resident teams.6 Timmermans et al reported similar cost-effectiveness, LOS, and quality of care between PA and physician teams that included a hybrid of attending-only and resident teams.13,14 In contrast, Singh et al and Iannuzzi et al reported increased LOS among PA teams,10,12 whereas Chin et al observed an increased LOS and reduced 30-day readmissions among hospitalist teams.8 While these observed differences may be attributable to heterogeneous patient populations or institution-specific team structure, the exact reasons remain unknown. Furthermore, understanding the value of alternate staffing models is essential for medical centers to prepare for potential COVID-19-related physician shortages. To our knowledge, no study to date has directly compared outcomes between resident, APC, and hospitalist team structures within an academic medical center.

We believe our institution provides a unique environment to study the differences in inpatient general medicine team structure with respect to quality and efficiency of care delivery. The objective of our study was to directly compare clinical outcomes and resource utilization among three distinct team structures: APC, resident, and solo hospitalist. We hypothesized that clinical outcomes, cost, and utilization of consult services would be similar across all team structures and that hospitalist teams would discharge patients earlier than resident and APC teams.

METHODS

Study Design and Setting

We conducted a retrospective observational cohort study at the University of Utah Medical Center, a 548-bed academic medical center in Salt Lake City. An electronic database query was used to identify all patients discharged from the inpatient general internal medicine service between July 1, 2015, and July 1, 2018. Baseline patient characteristics were collected including age, gender, and Charlson comorbidity index (CCI).15 Case-mix index was determined for admissions where a Medicare Severity Diagnosis Related Group (MS-DRG) and corresponding weight was assigned.16,17 Source of admission was collected to identify patients transferred from an outside hospital, typically due to increased medical complexity or need for specialty care not available at the referring center. Time of admission was collected to classify whether a patient was admitted during the day or at night. Length of stay was calculated as the difference between discharge date/time and admission date/time. Discharge order time was collected as a measure of clinician efficiency. The number of consults per admission was determined by the number of different medical or surgical subspecialty services that wrote at least one consultation or progress note after the time of admission and were not the primary service at the time the note was written. The project was reviewed and deemed exempt by the University of Utah Institutional Review Board (IRB 00104884).
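The derived variables described above (LOS as the difference between discharge and admission date/time, and night admission as arrival between 6 PM and 6 AM) can be sketched in pandas. This is an illustrative sketch only; the column names and example timestamps are assumptions, not the study's actual data warehouse schema.

```python
import pandas as pd

# Hypothetical admissions table; column names and timestamps are
# illustrative assumptions, not the study's actual schema.
df = pd.DataFrame({
    "admit_time": pd.to_datetime(["2016-01-01 22:15", "2016-01-02 09:30"]),
    "discharge_time": pd.to_datetime(["2016-01-04 14:50", "2016-01-05 11:05"]),
})

# Length of stay in days: discharge date/time minus admission date/time.
df["los_days"] = (df["discharge_time"] - df["admit_time"]).dt.total_seconds() / 86400

# Night admission per the study's definition: 6 PM to 6 AM.
hour = df["admit_time"].dt.hour
df["night_admit"] = (hour >= 18) | (hour < 6)

print(df[["los_days", "night_admit"]])
```

The same query could derive discharge order time (hours after midnight) and count consulting services per admission from note metadata.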

Inpatient Care Team Structure

Patients were assigned to one of three cohorts dependent on the assigned treatment team at the time of discharge. The three inpatient team structures were as follows: (1) a “resident team” composed of a senior resident (postgraduate year [PGY] 2 or PGY3) and one to two medical students or one senior resident, two interns (PGY1), and one to two medical students supervised by a hospitalist physician; (2) an “APC team” composed of one to two APCs supervised by a hospitalist physician; and (3) a “hospitalist team” composed of one attending hospitalist independently managing all patients.

Advanced Practice Clinicians

The APC service included 10 APCs (8 PAs and 2 NPs), with a combined workforce of nine APC full-time equivalents during the study period. Their experience ranged from new graduate to 11 years of clinical experience, with an average of 4.2 years. Among the 6 APCs with prior clinical experience, the majority (86%) of their years of clinical experience were within inpatient medicine, oncology, or cardiology. Recognizing the variability in clinical experience, we employed a rigorous onboarding program that entailed an average of 80 hours of didactic sessions including 1:1 teaching of the inpatient Society of Hospital Medicine core lecture series combined with initial intense clinical oversight.18 This program ranged from 2 weeks to 6 weeks depending on the individual APC’s clinical experience, progress, and comfort working independently. This onboarding program has subsequently been formalized into a 1-year APC fellowship that began after the study period concluded.

The degree of autonomy for each APC was individualized based on their clinical experience and their ability to recognize their own limitations in areas such as medical decision-making, clinical knowledge, and effective use of interprofessional team members (eg, peers, nursing, ancillary staff, consultants, and support personnel). APCs who demonstrated a sufficient level of clinical competence functioned with a high level of autonomy. During the day, APCs were expected to be the first point of contact for interprofessional team members, to respond to acute clinical changes in a patient's condition, and to discuss active issues with the supervising attending, with the majority of medical decision-making, direct patient communication, documentation, and care coordination performed by the APC. An experienced subset of the APC service was responsible for overnight coverage. Nocturnist APCs independently managed all cross-cover issues on patients assigned to APC and hospitalist teams and performed admissions with little to no direct supervision from the overnight attending physician.

Patient Admission and Redistribution Process

During the study period, resident teams performed all daytime admissions (6 am to 6 pm) on a rotating basis. On any given day, three of four resident teams performed daytime admissions with the fourth team designated as “golden” and free from admitting duties. Patients admitted during the day remained assigned to the resident team for continuity. The APC and hospitalist teams did not accept new admissions during the day. Nighttime admissions (6 pm to 6 am) were performed by a separate team composed of two senior residents, two interns, one APC, occasional APC and medical students, and one supervising attending hospitalist. This team functioned as a single unit. Nighttime admissions were performed in a sequential and rotating fashion (eg, Intern A > Intern B > Resident A > Resident B > APC > student(s) > Intern A > Intern B, etc). Patients admitted overnight were randomly redistributed the following morning, with the majority reassigned to an APC team or hospitalist team in order to offset the workload of the resident teams performing daytime admissions. Following redistribution, a patient would remain assigned to the daytime APC or hospitalist team for the duration of their hospitalization. The redistribution decisions were based on individual team census, without systematic consideration of an individual patient’s diagnosis, medical complexity, socioeconomic status, or perceived quality of learning potential (eg, good teaching case).
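The sequential, rotating nighttime admission order described above amounts to a simple round-robin, which can be illustrated as follows (a sketch only; assignments at the study site were made by the team, not by software):

```python
from itertools import cycle, islice

# Rotating nighttime admission order described in the text.
order = ["Intern A", "Intern B", "Resident A", "Resident B", "APC", "Student"]
rotation = cycle(order)

# The first eight overnight admissions wrap back to the start of the order.
assignments = list(islice(rotation, 8))
print(assignments)
# ['Intern A', 'Intern B', 'Resident A', 'Resident B', 'APC', 'Student',
#  'Intern A', 'Intern B']
```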

Study Outcomes

We divided study outcomes into two categories, clinical outcomes and resource utilization. Clinical outcomes included LOS, unplanned readmission within 30 days, and inpatient mortality and were designed to measure patient-related outcomes as a reflection of the quality of care delivered by different team structures. Resource utilization included discharge order time, discharge time, consults per admission, and total direct cost, which were designed to measure provider-related differences in efficiency and cost of care.

Statistical Analysis

Baseline characteristics and unadjusted outcomes are reported as frequency and percent, normally distributed variables as mean with SD, and nonnormally distributed variables as median with interquartile range (IQR). Baseline characteristics and unadjusted outcomes were compared using the chi-square test or the t test, where appropriate. Multivariable regression analysis using generalized linear models with a log link function and gamma distribution was used for continuous outcomes. Multivariable logistic regression was used for binary outcomes.10 Covariates included in regression models were age, gender, CCI, transfer from an outside hospital, and nighttime admission. In a sensitivity analysis, we included MS-DRG weight as a covariate for 85% of hospitalizations in our cohort exclusive of observation stays, and our findings were qualitatively similar (data not reported but available on request). Adjusted continuous outcomes were estimated using marginal effects at the means.19 Due to the sensitivity of cost data and an institutional policy against disclosing cost figures, total direct costs were normalized using the unadjusted median and adjusted mean total direct cost of an admission to an APC team as the normalizing value. A P value cutoff of .05 was used to determine statistical significance. Stata/IC version 16.1 (StataCorp) was used for all analyses.

RESULTS

Study Population

A total of 12,716 hospital admissions were identified during the study period. Of these, 7,943 (62.5%) admissions were assigned to a resident team, 3,519 (27.7%) admissions were assigned to an APC team, and the remaining 1,254 (9.9%) were assigned to a hospitalist team. Baseline patient characteristics are reported in Table 1. Patients admitted to resident teams (mean age [SD], 56.9 [19.1] years) were younger than those admitted to an APC team (58.0 [19.3] years; P = .004) or a hospitalist team (58.2 [19.3] years; P = .026). The case-mix index (mean MS-DRG weight [SD], 1.44 [0.87]) was slightly lower for resident teams than that for APC teams (1.49 [0.90]; P = .025). Resident teams had a significantly lower proportion of night admissions than did APC teams (32.0% vs 49.5%; P < .001) and hospitalist teams (48.6%; P < .001). APC teams were assigned more patients transferred from an outside hospital (19.1%), compared with resident teams (15.0%; P < .001) and hospitalist teams (16.0%; P = .015). No other significant differences were observed in baseline characteristics between cohorts.

Baseline Patient Characteristics

Clinical Outcomes

Unadjusted analysis demonstrated the LOS was similar among resident, APC, and hospitalist teams with a median (IQR) LOS of 2.90 (1.86, 4.26) days, 2.93 (1.89, 4.66) days, and 2.86 (1.84, 4.67) days, respectively. No significant differences were observed in unadjusted 30-day readmissions or inpatient mortality among the team structures (Table 2). Following multivariable adjustment for differences in baseline characteristics, no significant differences were observed in LOS, 30-day readmission, or inpatient mortality among teams (Table 3).

Comparison of Unadjusted Clinical Outcomes and Resource Utilization Among Resident, APC, and Hospitalist Teams

Resource Utilization

In unadjusted comparisons, hospitalist teams were observed to place discharge orders more than 30 minutes earlier than APC teams (median hours after midnight [IQR], 11.20 [9.63, 13.60] vs 11.73 [10.00, 13.87]; P < .001) and 54 minutes earlier than resident teams (12.10 [10.38, 13.90]; P < .001) (Table 2). Consistent with the earlier placement of discharge orders, hospitalist patients were also discharged from the hospital 26 and 32 minutes earlier than APC and resident patients, respectively. APC teams also discharged patients slightly earlier (6 minutes) than resident teams (median hours after midnight [IQR], 14.97 [13.23, 16.72] vs 15.07 [13.42, 16.73]; P = .045). Median consultation use among teams was similar, although statistically significant differences were present. Normalized total direct cost was 8% higher (P < .001) for admissions to APC teams than that for resident teams and 7% higher (P = .008) than that for hospitalist teams in unadjusted analysis (Table 2).

Following multivariable adjustment, the mean differences in discharge order time and discharge time remained significant with hospitalist teams discharging patients an average of 20 to 30 minutes earlier than APC and resident teams (Table 3). Consultant utilization remained significantly different between teams, with APC teams utilizing consultants on average 15% more than hospitalist teams (P < .001) and 7% more than resident teams (P = .001). The differences in total direct costs were not significant after adjusted analysis.

Comparison of Adjusted Clinical Outcomes and Resource Utilization Among Resident, APC, and Hospitalist Teams

DISCUSSION

Many academic medical centers have expanded their workforce with APC or nonteaching hospitalist teams to accommodate the increasing volume of hospital admissions, resident work hour restrictions,1,2 and medical complexity of an aging population. Several hospitals have reported comparative outcomes between different care delivery models, with conflicting results.6,8,10-12 In our study, we directly evaluated three inpatient care delivery models and found that hospitalist teams discharged patients more efficiently and utilized fewer consultants, compared with APC and resident teams. In spite of this improved efficiency, no significant differences were observed in cost or other clinical outcomes.

Our findings further strengthen the evidence supporting the use of APCs on inpatient general medicine services and are of particular interest to academic centers struggling to expand staffing to offset growth in patient volume and a reduced resident workforce. We believe several findings from our study warrant further discussion.

First, although hospitalist teams were able to discharge patients more efficiently, this observation may reflect workflow factors rather than meaningful differences in efficiency between provider types (ie, APC vs hospitalist vs resident physician). As with most academic centers, patients assigned to resident teams are presented by house staff to an attending physician who is ultimately responsible for patient care decisions. Therefore, it is conceivable that delays in the discharge process are in part related to the convention of bedside rounding and discussing the care plan prior to discharge.20 In fact, we recognized this as a bottleneck and changed our discharge process for resident teams in June 2017, with a measurable improvement in discharge times. In the absence of this intervention, our observed differences in discharge times among teams may have been even greater.

Second, no significant differences in clinical outcomes were observed in our adjusted analyses, which suggests that a similar quality of care is delivered to patients regardless of team structure, an important observation when considering different staffing models.

Third, we observed a significant increase in consultation use among resident and APC teams, compared with hospitalists. While we are not able to precisely identify the basis for this variation, we believe it could reflect differences in clinical experience, comfort with diagnostic uncertainty, or the unequal distribution of patients transferred from outside hospitals for tertiary care. Interestingly, the greater consultation use did not correlate with higher healthcare costs, a finding recently reported by Stevens et al.21

Fourth, we believe the lack of differences in cost and clinical outcomes among team structures may be of particular interest to academic centers when considering physician burnout, salaries, and clinical education. Clerical burden, such as completing clinical documentation and computerized physician order entry, has been implicated as a risk factor for physician burnout.22 Incorporating APCs into roles similar to those performed by resident physicians may reduce the clerical burden on hospitalists, thereby reducing the risk of physician burnout. The addition of APCs may also represent an opportunity for cost savings for healthcare centers when comparing the median salary of an APC to that of an internal medicine hospitalist.23,24 Moreover, academic hospitalists have been shown to be excellent medical educators and report increased job satisfaction with a variety of duties beyond direct patient care.24,25 Unforeseen benefits of adding APC teams within our institution have been the added teaching opportunities for APCs and APC students, increased collegiality with the APCs, and the creation of an APC fellowship program with a focus on inpatient medicine. Similar postgraduate training programs have been reported and serve as effective models to train APCs for hospital-based practice.26

Lastly, although this project was conceived and completed prior to the COVID-19 pandemic, our observations may be informative for medical centers experiencing a workforce shortage caused by a surge of COVID-19 patients. During a physician shortage, we believe our APC team model could be rapidly expanded to accommodate a large influx of patients. This expansion could be accomplished through a single attending physician overseeing multiple APC teams. In this model, the supervising physician would evaluate only the most complex patients, with most patients managed solely by an APC from admission to discharge. Such changes may require temporary suspension of state laws restricting APC independent practice.27,28

Our findings contrast with those of previous reports in that we did not observe significant differences in clinical outcomes (ie, LOS, inpatient mortality, and 30-day readmissions) or total direct cost.8,10,21 Other institutions have noted an increased LOS among APC teams and hospitalist teams, compared with resident teams.8,10 Furthermore, Chin et al and Iannuzzi et al reported reductions in healthcare cost for resident teams, whereas our study did not identify significant cost differences among team structures.8,10 Although we cannot pinpoint the exact reason(s) for these dissimilarities, it is plausible that unmeasured factors such as institutional differences in APC training, direct physician supervision, admission processes, or inpatient team census may play a role.

Several study limitations should be recognized. First, the retrospective, nonrandomized design is the most important limitation of our study. Administrative data were obtained via an electronic query of our data warehouse, and although we aimed to identify as many patient characteristics as possible to adjust for confounding effects, undetected differences among cohorts may exist. Second, our inpatient admission process may have placed undue burden on resident teams to perform all daytime admissions, inadvertently affecting study outcomes. It is possible the observed benefits of a solo hospitalist team are attributable to the lack of admitting duties rather than inherent advantages of the team structure. If this were the case, we would expect similar benefits among APC teams, which we did not observe. Third, the study was performed at a single academic center, which may limit the generalizability of our results. Fourth, it is possible the outcomes are similar among teams because our hospitalist faculty rotate proportionately between the different teams. Lastly, the study was underpowered to detect a significant difference in mortality between hospitalist and APC teams. A post hoc power calculation based on our observed sample and effect sizes estimated 75% power to detect a mortality difference between hospitalists and APCs; other mortality comparisons were adequately powered.
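A post hoc power calculation of the kind described can be sketched with statsmodels. The cohort sizes below come from the study, but the mortality proportions are hypothetical placeholders, since the observed rates are not reported in this section.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical mortality proportions (assumptions, not study figures).
p_hospitalist, p_apc = 0.010, 0.020
# Cohort sizes reported in the study.
n_hospitalist, n_apc = 1254, 3519

# Cohen's h effect size for comparing two proportions.
h = proportion_effectsize(p_apc, p_hospitalist)

# Power of a two-sided, two-sample z test at alpha = .05 with
# unequal group sizes (ratio = n2 / n1).
power = NormalIndPower().power(
    effect_size=h,
    nobs1=n_apc,
    ratio=n_hospitalist / n_apc,
    alpha=0.05,
    alternative="two-sided",
)
print(round(power, 3))
```

With the true observed rates substituted, a calculation of this form would reproduce the 75% figure cited above.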

CONCLUSION

We observed similar total direct costs, LOS, 30-day readmission, and inpatient mortality among hospitalist, APC, and resident teams. APC and resident teams utilized more consultants and discharged patients later than hospitalist teams. Our analysis suggests clinical outcomes are not significantly affected by inpatient team structure, and the addition of general medicine inpatient APC or hospitalist teams represents a safe and efficient alternative to traditional resident teams within an academic medical center.

Disclosures

All authors declare they have no conflicts of interest.

References

1. Report of the Work Group on Resident Duty Hours and the Learning Environment, June 11, 2002. Accreditation Council for Graduate Medical Education; 2003.
2. ACGME Task Force on Quality Care and Professionalism. Philibert I, Amis Steve, eds. The ACGME 2011 Duty Hour Standards: Enhancing Quality of Care, Supervision, and Resident Professional Development. Accreditation Council for Graduate Medical Education; 2011. https://www.acgme.org/Portals/0/PDFs/jgme-monograph[1].pdf
3. Konstam MA, Hill JA, Kovacs RJ, et al. The academic medical system: reinvention to survive the revolution in health care. J Am Coll Cardiol. 2017;69(10):1305-1312. https://doi.org/10.1016/j.jacc.2016.12.024
4. The future of the academic medical center: strategies to avoid a margin meltdown. Health Research Institute. February 2012. https://uofuhealth.utah.edu/hcr/2012/resources/the-future-of-academic-medical-centers.pdf
5. Moote M, Krsek C, Kleinpell R, Todd B. Physician assistant and nurse practitioner utilization in academic medical centers. Am J Med Qual. 2019;34(5):465-472. https://doi.org/10.1177/1062860619873216
6. Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368. https://doi.org/10.1002/jhm.352
7. Denne E. Behind the scenes at Northwell Health as PAs respond to COVID-19. American Academy of Physician Assistants. May 11, 2020. Accessed May 15, 2020. https://www.aapa.org/news-central/2020/05/behind-the-scenes-at-northwell-heath-as-pas-respond-to-covid-19/
8. Chin DL, Wilson MH, Bang H, Romano PS. Comparing patient outcomes of academician-preceptors, hospitalist-preceptors, and hospitalists on internal medicine services in an academic medical center. J Gen Intern Med. 2014;29(12):1672-1678. https://doi.org/10.1007/s11606-014-2982-y
9. Cowan MJ, Shapiro M, Hays RD, et al. The effect of a multidisciplinary hospitalist/physician and advanced practice nurse collaboration on hospital costs. J Nurs Adm. 2006;36(2):79-85. https://doi.org/10.1097/00005110-200602000-00006
10. Iannuzzi MC, Iannuzzi JC, Holtsbery A, Wright SM, Knohl SJ. Comparing hospitalist-resident to hospitalist-midlevel practitioner team performance on length of stay and direct patient care cost. J Grad Med Educ. 2015;7(1):65-69. https://doi.org/10.4300/jgme-d-14-00234.1
11. Kapu AN, Kleinpell R, Pilon B. Quality and financial impact of adding nurse practitioners to inpatient care teams. J Nurs Adm. 2014;44(2):87-96. https://doi.org/10.1097/nna.0000000000000031
12. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. https://doi.org/10.1002/jhm.826
13. Timmermans MJC, van Vught A, Peters YAS, et al. The impact of the implementation of physician assistants in inpatient care: a multicenter matched-controlled study. PLoS One. 2017;12(8):e0178212. https://doi.org/10.1371/journal.pone.0178212
14. Timmermans MJC, van den Brink GT, van Vught A, et al. The involvement of physician assistants in inpatient care in hospitals in the Netherlands: a cost-effectiveness analysis. BMJ Open. 2017;7(7):e016405. https://doi.org/10.1136/bmjopen-2017-016405
15. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. https://doi.org/10.1016/0021-9681(87)90171-8
16. MS-DRG Classifications and Software. Centers for Medicare & Medicaid Services. 2020. Updated April 28, 2020. Accessed May 5, 2020. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/MS-DRG-Classifications-and-Software
17. Fetter RB, Shin Y, Freeman JL, Averill RF, Thompson JD. Case mix definition by diagnosis-related groups. Med Care. 1980;18(2 Suppl):iii, 1-53.
18. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine--2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287. https://doi.org/10.12788/jhm.2715
19. Williams R. Using the margins command to estimate and interpret adjusted predictions and marginal effects. Stata J. 2012;12(2):308-331. https://doi.org/10.1177%2F1536867X1201200209
20. Goolsarran N, Olowo G, Ling Y, Abbasi S, Taub E, Teressa G. Outcomes of a resident-led early hospital discharge intervention. J Gen Intern Med. 2020;35(2):437-443. https://doi.org/10.1007/s11606-019-05563-w
21. Stevens JP, Hatfield LA, Nyweide DJ, Landon B. Association of variation in consultant use among hospitalist physicians with outcomes among Medicare beneficiaries. JAMA Netw Open. 2020;3(2):e1921750. https://doi.org/10.1001/jamanetworkopen.2019.21750
22. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836-848. https://doi.org/10.1016/j.mayocp.2016.05.007
23. 2019 AAPA Salary Report. American Academy of PAs. 2019. https://www.aapa.org/shop/salary-report-2019/
24. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB; Society of Hospital Medicine Career Satisfaction Task Force. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907
25. Dalen JE, Ryan KJ, Waterbrook AL, Alpert JS. Hospitalists, medical education, and US health care costs. Am J Med. 2018;131(11):1267-1269. https://doi.org/10.1016/j.amjmed.2018.05.016
26. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. https://doi.org/10.1002/jhm.619
27. Utah Physician Assistant Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter70A/C58-70a_2019051420190514.pdf
28. Nurse Practice Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter31B/C58-31b_1800010118000101.pdf

14. Timmermans MJC, van den Brink GT, van Vught A, et al. The involvement of physician assistants in inpatient care in hospitals in the Netherlands: a cost-effectiveness analysis. BMJ Open. 2017;7(7):e016405. https://doi.org/10.1136/bmjopen-2017-016405
15. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383. https://doi.org/10.1016/0021-9681(87)90171-8
16. MS-DRG Classifications and Software. Centers for Medicare & Medicaid Services. 2020. Updated April 28, 2020. Accessed May 5, 2020. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/MS-DRG-Classifications-and-Software
17. Fetter RB, Shin Y, Freeman JL, Averill RF, Thompson JD. Case mix definition by diagnosis-related groups. Med Care. 1980;18(2 Suppl):iii, 1-53.
18. Nichani S, Crocker J, Fitterman N, Lukela M. Updating the core competencies in hospital medicine--2017 revision: introduction and methodology. J Hosp Med. 2017;12(4):283-287. https://doi.org/10.12788/jhm.2715
19. Williams R. Using the margins command to estimate and interpret adjusted predictions and marginal effects. Stata J. 2012;12(2):308-331. https://doi.org/10.1177%2F1536867X1201200209
20. Goolsarran N, Olowo G, Ling Y, Abbasi S, Taub E, Teressa G. Outcomes of a resident-led early hospital discharge intervention. J Gen Intern Med. 2020;35(2):437-443. https://doi.org/10.1007/s11606-019-05563-w
21. Stevens JP, Hatfield LA, Nyweide DJ, Landon B. Association of variation in consultant use among hospitalist physicians with outcomes among Medicare beneficiaries. JAMA Netw Open. 2020;3(2):e1921750. https://doi.org/10.1001/jamanetworkopen.2019.21750
22. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc. 2016;91(7):836-848. https://doi.org/10.1016/j.mayocp.2016.05.007
23. 2019 AAPA Salary Report. American Academy of PAs. 2019. https://www.aapa.org/shop/salary-report-2019/
24. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB; Society of Hospital Medicine Career Satisfaction Task Force. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907
25. Dalen JE, Ryan KJ, Waterbrook AL, Alpert JS. Hospitalists, medical education, and US health care costs. Am J Med. 2018;131(11):1267-1269. https://doi.org/10.1016/j.amjmed.2018.05.016
26. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. https://doi.org/10.1002/jhm.619
27. Utah Physician Assistant Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter70A/C58-70a_2019051420190514.pdf
28. Nurse Practice Act. Utah Code. Published 2019. Accessed May 8, 2020. https://le.utah.gov/xcode/Title58/Chapter31B/C58-31b_1800010118000101.pdf

Comparison of Resident, Advanced Practice Clinician, and Hospitalist Teams in an Academic Medical Center: Association With Clinical Outcomes and Resource Utilization

Journal of Hospital Medicine 15(12):709-715. Published Online First November 18, 2020
© 2020 Society of Hospital Medicine
Correspondence: Stacy A Johnson, MD; Email: stacy.a.johnson@hsc.utah.edu; Telephone: 801-581-7822.

Performance of Pediatric Readmission Measures


Readmission rates are frequently used as a hospital quality metric, with applications including hospital-level payment incentives,1 condition-specific quality measurement,2 balancing measures for quality improvement projects,3-5 assessment of transition success,6,7 and public hospital rankings.8 Currently, four methods are commonly used to evaluate pediatric readmissions, each with strengths and limitations, including the following (Appendix Table 1):

1. All-cause readmissions: A measure of any readmission within a given time period regardless of the reason for readmission.9

2. Unplanned readmission/time flag: A measure intended to identify unplanned readmissions. This measure relies on time designations within the electronic health record. The time between hospital registration and admission is calculated, and if the readmission is registered more than 24 hours prior to admission, the readmission is considered planned.10 Hereafter, this measure will be referred to as the time flag measure.

3. Pediatric all-condition readmission (PACR): A measure intended to identify unplanned readmission through the exclusion of certain procedures and diagnoses.11

4. Potentially preventable readmission (PPR): A method to identify preventable readmissions based on a proprietary algorithm developed by 3M Health Information Systems.12,13
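The time flag logic in measure 2 reduces to a single comparison of two timestamps. A minimal sketch, assuming hypothetical field names rather than the cited measure's actual specification:

```python
from datetime import datetime, timedelta

def is_planned(registration_time: datetime, admission_time: datetime) -> bool:
    """Time flag rule: a readmission registered more than 24 hours
    before admission is considered planned. Field names are hypothetical."""
    return admission_time - registration_time > timedelta(hours=24)

# A readmission registered 3 days ahead would be flagged as planned;
# one registered 2 hours ahead would be flagged as unplanned.
print(is_planned(datetime(2016, 1, 1, 9), datetime(2016, 1, 4, 9)))   # True
print(is_planned(datetime(2016, 1, 1, 7), datetime(2016, 1, 1, 9)))   # False
```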

While all four of these measures are used to assess quality, little is known about their ability to exclude planned readmissions and identify only preventable pediatric readmissions, which conceptually are most relevant to the quality of care. However, many of these measures were not intended to capture preventability but instead capture the related issue of whether the readmission was planned. Therefore, we sought to evaluate the four readmission measures as they relate to both preventability and unplanned status, as determined through medical record review with multidisciplinary care provider input.

METHODS

As part of a hospital-wide readmission reduction quality improvement collaborative at a free-standing tertiary care children’s hospital, clinicians from hospital medicine, cardiology, neonatology, and neurology teams reviewed 30-day readmissions using a standardized abstraction tool. All readmission events (observation or inpatient encounter) after any discharge (observation or inpatient encounter) from eligible units were reviewed; therefore, each hospitalization was a potential index hospitalization. We classified the preventability of each readmission with use of a previously described Likert scale with high interrater reliability.14 For these analyses, readmissions were considered preventable if the reviewing team rated them as either “more likely preventable” or “preventable in most circumstances.” Each readmission was also evaluated as planned or unplanned. Methods for readmission review and classification are in the Appendix.

We included all readmissions between July 2014 and June 2016. We compared the medical record review classifications with the assessments from each of the four measures of pediatric readmission. We calculated sensitivity and specificity for both outcomes (planned/unplanned and preventable/not preventable) for all four measures. To standardize discussion, we categorized measure performance as “very poor” (less than 50%), “poor” (50%-75%), “fair” (75%-85%), “good” (85%-90%), “very good” (90%-95%), and “excellent” (greater than 95%). We also calculated positive and negative predictive values (PPV and NPV) over plausible ranges of prevalence using the sensitivity and specificity of each comparison (Appendix).
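PPV and NPV depend on prevalence as well as on sensitivity and specificity, which is why they must be computed over a range of plausible prevalences. A minimal sketch of the calculation via Bayes' theorem, together with the qualitative bands defined above; the numeric inputs are illustrative rather than taken from the study's tables, and the handling of boundary values in the bands is our assumption, since the stated ranges share endpoints:

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    """Positive/negative predictive value at a given prevalence (Bayes' theorem)."""
    tp = sensitivity * prevalence              # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

def band(value: float) -> str:
    """Qualitative bands from the text; boundary handling is our assumption."""
    if value < 0.50:
        return "very poor"
    if value <= 0.75:
        return "poor"
    if value <= 0.85:
        return "fair"
    if value <= 0.90:
        return "good"
    if value <= 0.95:
        return "very good"
    return "excellent"

# Illustrative: 90% sensitivity, 40% specificity, 16% prevalence of preventability
ppv, npv = ppv_npv(0.90, 0.40, 0.16)  # ppv ≈ 0.22 ("very poor"), npv ≈ 0.95
```

Even a highly sensitive measure can have very poor PPV when the outcome (here, preventability) is uncommon, which is consistent with the pattern reported in the Results.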

Of note, certain exclusions are outlined by the PACR and PPR algorithms. The PACR evaluates only readmission events that occur in children younger than 18 years. The PPR algorithm does not assign preventability if either the index or readmission event is classified as an observation stay or if it is part of a larger chain of readmissions.

RESULTS

Among the 30-day readmissions considered, 1,643 were eligible for medical record review; 1,125 reviews (68.5%) were completed by the clinical teams. The median time to readmission was 7 days (interquartile range [IQR], 4-18). Most children were non-Hispanic White (71%) or Black (20%). The median age at hospitalization was 2.3 years (IQR, 0.4-12.1). Most children had Medicaid (56%) or private (41%) insurance. Most reviews were performed in cardiology (43%) and hospital medicine (37%), with neurology (13%) and neonatology (7%) patients constituting the remainder. Uncontrolled advancement of chronic disease was the most common readmission category on medical record review (25.1%), followed by unrelated readmission (20.7%), scheduled readmission (20.4%), and progression of acute disease (16.6%) (Appendix Table 2).

Assessment of Preventable and Unplanned Readmissions

On multidisciplinary medical record review, most readmissions were classified as not preventable (84.5%). Specifically, 64% were not preventable and unplanned; 20% were deemed not preventable and planned. Only 15% were classified as unplanned and preventable and 1% as planned and preventable (Appendix Figure: Population A/B).

Matching Chart Review to the Four Algorithms

All 1,125 readmissions were assessed by the all-cause and time flag readmission measures (Appendix Figure: Population A/B). After applying algorithm exclusions (details in Appendix), only 804 of the 1,125 (71.5%) reviewed readmissions matched for PACR comparison (Appendix Figure: Population C), and 487 of the 1,125 (43.3%) matched for PPR comparison (Appendix Figure: Population D).

All-Cause

Because the all-cause measure determines only whether a readmission occurred, it is by definition 100% sensitive and 0% specific for both preventability and unplanned status (Table: Section A).
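This degenerate case follows directly from the definitions of sensitivity and specificity; a minimal sketch with illustrative counts (not the study's actual counts), taking chart review as the reference standard:

```python
def sens_spec(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# A measure that flags every readmission produces no negative calls at all:
# every truly preventable case is caught (fn = 0), but no non-preventable
# case is ever ruled out (tn = 0), so sensitivity = 1.0 and specificity = 0.0.
sens, spec = sens_spec(tp=175, fp=950, tn=0, fn=0)  # (1.0, 0.0)
```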

Table. Sensitivity and Specificity of Preventable and Unplanned Readmission Metrics

Time Flag

The time flag measure identified 80% (866/1,112) of the readmissions as unplanned. This measure had very good sensitivity but very poor specificity in identifying preventable readmissions, which corresponded to very poor PPV and good to excellent NPV. In terms of identifying unplanned readmissions, the time flag measure had excellent sensitivity and very good specificity, which corresponded to very good to excellent PPV and good to very good NPV (Table: Section B).

PACR

The PACR algorithm identified 75% (599/796) of readmissions as unplanned. The PACR had good sensitivity but very poor specificity in identifying preventable readmissions, which corresponded to very poor PPV and fair to very good NPV. In terms of identifying unplanned readmissions, the PACR had fair sensitivity but poor specificity, which corresponded to fair PPV and poor NPV (Table: Section C).

PPR

The PPR algorithm identified 53% (257/487) of readmissions as potentially preventable. The PPR algorithm had poor sensitivity and specificity in identifying preventable readmissions, which corresponded to very poor PPV and fair to very good NPV. In terms of identifying unplanned readmissions, the PPR algorithm had poor sensitivity and fair specificity, which corresponded to fair to good PPV and very poor to poor NPV (Table: Section D).

Evaluation of Excluded Readmission Events

Because both the PACR and PPR had large numbers of algorithm exclusions, we describe the preventability and planned status of the excluded readmission events. Both algorithms excluded preventable events: of the 321 readmissions excluded by the PACR algorithm, 13.4% were classified as preventable by chart review; likewise, 14.9% of the 638 readmissions excluded by the PPR were classified as preventable by chart review.

DISCUSSION

The ability to accurately capture preventable pediatric readmissions is a goal for hospital quality experts and health policymakers alike. Of the four measures commonly used to assess readmission, only the PPR is designed to focus on preventability. Unfortunately, none of the four is adequately sensitive or specific to identify preventable readmissions; all had very poor PPV for preventability. Of the four, the time flag measure had the best sensitivity, specificity, PPV, and NPV for identifying unplanned readmissions.

The overall percentages of unplanned readmissions identified by the time flag and PACR measures match those identified by chart review: the time flag measure identified 80% of readmissions as unplanned versus 79% by chart review (Appendix Figure: Population A/B); the PACR classified 75% as unplanned versus 81% by chart review for PACR-eligible readmissions (Appendix Figure: Population C). In contrast, the PPR algorithm classified far more readmissions as potentially preventable (53%) than did chart review (16%) (Appendix Figure: Population D). The PACR and PPR algorithms also exclude a significant number of readmissions that are unplanned and a smaller, but not trivial, number that are preventable; these exclusions limit their accuracy.

The ability to apply these four measures in real time during a hospitalization varies by metric. Two of the measures, all-cause and time flag, can be applied during a readmission event, which is appealing for quality improvement initiatives. These measures allow providers to be notified that a current hospitalization is a readmission event, giving them the opportunity to learn from these events as they occur (Appendix Table 1). While “unplanned” is not the same as “potentially preventable,” almost all potentially preventable readmissions are unplanned; therefore, accurately identifying unplanned readmissions is more useful than counting all-cause readmissions. Additionally, a low all-cause readmission rate can be indicative of poor access to scheduled procedures. Nevertheless, all-cause readmission is sometimes used to measure quality.1,8 While the time flag measure may be more useful for quality improvement initiatives and hospital providers, it relies on hospital registration time, which is not widely available in administrative data sources and therefore has limited usefulness to policymakers.

Both PACR and PPR require administrative claims analysis, which is appealing from a policy standpoint. However, the reliance on claims data means the inclusion/exclusion of events can occur only retrospectively, which limits the usefulness of these measures for learning and intervening in real time. When the two are compared, the PACR offers better sensitivity and the PPR better specificity for identifying unplanned readmissions. The PPR software overcalls preventable readmissions, identifying more readmissions as preventable than chart review supports. Nevertheless, Medicaid in several states uses the PPR for payment incentives.1,15-17 Given the poor performance of the PPR in assessing both preventable and unplanned pediatric readmissions, its use as a quality metric should be limited.

This study should be considered in the context of several limitations. First, because the assessment of preventability was conducted as part of a quality improvement collaborative rather than a planned research endeavor, not all readmission reviews were completed, and existing tools18 that allow preventability assessment via more structured medical record review were not used. Second, we reviewed cases only from certain clinical services, limiting the generalizability of these findings to all pediatric admissions. However, given the low sensitivity and specificity of some of the metrics, we would not anticipate that adding other types of admissions would improve sensitivity and specificity enough to ensure reliability. Third, while we relied on an established method to determine preventability, prior work has demonstrated that additional information gathered from families may change preventability assessments.19 Finally, due to the exclusions required by the PPR and PACR algorithms, not all readmission events were reviewed. However, these exclusions reflect the actual specifications of use for both measures.

CONCLUSION

The PPR software has poor fidelity in identifying preventable and unplanned pediatric readmissions; this finding has broad policy implications given how widely it is used by state Medicaid offices to assess financial penalties. Among the four pediatric readmission measures evaluated, the time flag metric best identifies unplanned readmissions.

Disclosures

The authors have no conflicts of interest or financial relationships relevant to this article to disclose.

Funding

Dr Auger’s research is supported by a grant from the Agency for Healthcare Research and Quality (1K08HS204735-01A1). The project described was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health, under Award Number 5UL1TR001425-04. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

References

1. State Medicaid Payment Policies for Inpatient Hospital Services. Medicaid and CHIP Payment and Access Commission; December 2018. Accessed June 1, 2019. https://www.macpac.gov/publication/macpac-inpatient-hospital-payment-landscapes/
2. Mangione-Smith R, Zhou C, Williams DJ, et al. Pediatric Respiratory Illness Measurement System (PRIMES) scores and outcomes. Pediatrics. 2019;144(2):e20190242. https://doi.org/10.1542/peds.2019-0242
3. Biondi EA, McCulloh R, Staggs VS, et al. Reducing Variability in the Infant Sepsis Evaluation (REVISE): a national quality initiative. Pediatrics. 2019;144(3):e20182201. https://doi.org/10.1542/peds.2018-2201
4. Statile AM, Schondelmeyer AC, Thomson JE, et al. Improving discharge efficiency in medically complex pediatric patients. Pediatrics. 2016;138(2):e20153832. https://doi.org/10.1542/peds.2015-3832
5. White CM, Statile AM, White DL, et al. Using quality improvement to optimise paediatric discharge efficiency. BMJ Qual Saf. 2014;23(5):428-436. https://doi.org/10.1136/bmjqs-2013-002556
6. Auger KA, Simmons JM, Tubbs-Cooley HL, et al; H20 Trial Study Group. Postdischarge nurse home visits and reuse: the Hospital to Home Outcomes (H2O) trial. Pediatrics. 2018;142(1):e20173919. https://doi.org/10.1542/peds.2017-3919
7. Auger KA, Shah SS, Tubbs-Cooley HL, et al. Effects of a 1-time nurse-led telephone call after pediatric discharge: the H2O II randomized clinical trial. JAMA Pediatr. 2018;172(9):e181482. https://doi.org/10.1001/jamapediatrics.2018.1482
8. Olmsted MG, Powell R, Murphy J, Bell D, Stanley M, Sanchez R. Methodology: U.S. News & World Report Best Children’s Hospitals 2019-20. U.S. News & World Report; June 17, 2019. Accessed June 16, 2020. https://www.usnews.com/static/documents/health/best-hospitals/BCH_Methodology_2019-20.pdf
9. Bardach NS, Vittinghoff E, Asteria-Peñaloza R, et al. Measuring hospital quality using pediatric readmission and revisit rates. Pediatrics. 2013;132(3):429-436. https://doi.org/10.1542/peds.2012-3527
10. Auger KA, Mueller EL, Weinberg SH, et al. A validated method for identifying unplanned pediatric readmission. J Pediatr. 2016;170:105-12.e102. https://doi.org/10.1016/j.jpeds.2015.11.051
11. Readmissions-Content. Boston Children’s Hospital. Accessed April 8, 2019. http://www.childrenshospital.org/research-and-innovation/research/centers/center-of-excellence-for-pediatric-quality-measurement-cepqm/cepqm-measures/pediatric-readmissions/content
12. Gay JC, Agrawal R, Auger KA, et al. Rates and impact of potentially preventable readmissions at children’s hospitals. J Pediatr. 2015;166(3):613-9.e5. https://doi.org/10.1016/j.jpeds.2014.10.052
13. Auger KA, Teufel RJ, Harris JM, et al. Children’s hospital characteristics and readmission metrics. Pediatrics. 2017;139(2):e20161720. https://doi.org/10.1542/peds.2016-1720
14. Hain PD, Gay JC, Berutti TW, Whitney GM, Wang W, Saville BR. Preventability of early readmissions at a children’s hospital. Pediatrics. 2013;131(1):e171-e181. https://doi.org/10.1542/peds.2012-0820
15. Potentially Preventable Events. Texas Health and Human Services. Accessed May 19, 2019. https://hhs.texas.gov/about-hhs/process-improvement/medicaid-chip-quality-efficiency-improvement/potentially-preventable-events
16. Potentially Preventable Readmissions. New York State Department of Health. Accessed May 28, 2019. https://regs.health.ny.gov/sites/default/files/pdf/recently_adopted_regulations/2011-02-23_potentially_preventable_readmissions.pdf
17. Potentially Preventable Readmissions Policy. Illinois Department of Healthcare and Family Services. Accessed May 28, 2019. https://www.illinois.gov/hfs/SiteCollectionDocuments/PPR_Overview.pdf
18. Jonas JA, Devon EP, Ronan JC, et al. Determining preventability of pediatric readmissions using fault tree analysis. J Hosp Med. 2016;11(5):329-335. https://doi.org/10.1002/jhm.2555
19. Toomey SL, Peltz A, Loren S, et al. Potentially preventable 30-day hospital readmissions at a children’s hospital. Pediatrics. 2016;138(2):e20154182. https://doi.org/10.1542/peds.2015-4182

Journal of Hospital Medicine 15(12):723-726. Published Online First November 18, 2020


This study should be considered in the context of several limitations. Because the assessment of preventability was determined as part of a learning quality improvement collaborative and not as a planned research endeavor, not all readmission reviews were completed nor were other existent tools18 that allow for preventability assessment via more structured medical record review used. Second, we reviewed cases only from certain clinical services, which would limit generalizability of these findings to all pediatric admissions. However, given the low sensitivity and specificity of some of the metrics, we would not anticipate that the addition of other types of admissions would improve the sensitivity and specificity enough to ensure reliability. Third, while we relied on an established method to determine preventability, prior work has demonstrated that additional information gathered from families may change preventability.19 Finally, due to the exclusions required by the PPR and PACR algorithms, not all readmission events were reviewed. However, these exclusions reflect the actual specifications of use for both measures.

CONCLUSION

The PPR software has poor fidelity in identifying preventable and unplanned pediatric readmission; this finding has broad policy implications given how widely it is used by state Medicaid offices to assess financial penalties. Among the four pediatric readmission measures used, the time flag metric best identifies unplanned readmissions.

Disclosures

The authors have no conflicts of interest or financial relationships relevant to this article to disclose.

Funding

Dr Auger’s research is supported by a grant from the Agency for Healthcare Research and Quality (1K08HS204735-01A1). The project described was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health, under Award Number 5UL1TR001425-04. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Readmission rates are frequently used as a hospital quality metric, with use including payment incentive at the hospital level,1 specific condition quality measurement,2 balancing measures for quality improvement projects,3-5 transition success,6,7 and use in public hospital rankings.8 Currently, four methods are commonly used to evaluate pediatric readmissions, each with strengths and limitations, including the following (Appendix Table 1):

1. All-cause readmissions: A measure of any readmission within a given time period regardless of the reason for readmission.9

2. Unplanned readmission/time flag: A measure intended to identify unplanned readmissions. This measure relies on time designations within the electronic health record. The time between hospital registration and admission is calculated, and if the readmission is registered more than 24 hours prior to admission, the readmission is considered planned.10 Hereafter, this measure will be referred to as the time flag measure.

3. Pediatric all-condition readmission (PACR): A measure intended to identify unplanned readmission through the exclusion of certain procedures and diagnoses.11

4. Potentially preventable readmission (PPR): A method to identify preventable readmissions based on a proprietary algorithm developed by 3M Health Information Systems.12,13
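The time flag rule in item 2 reduces to a single interval comparison. A minimal sketch in Python; the function and field names are illustrative, not taken from the measure's specification:

```python
from datetime import datetime, timedelta

def classify_time_flag(registration_time: datetime, admission_time: datetime) -> str:
    """Apply the time flag heuristic: if the readmission was registered
    more than 24 hours before the admission, it is considered planned;
    otherwise it is considered unplanned."""
    lead_time = admission_time - registration_time
    return "planned" if lead_time > timedelta(hours=24) else "unplanned"

# Registered two days ahead of admission -> planned
classify_time_flag(datetime(2020, 1, 1, 8), datetime(2020, 1, 3, 8))
```

In practice this depends on the electronic health record capturing both timestamps, which is why the measure is hard to reproduce from administrative claims alone.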

While all four of these measures are used to assess quality, little is known about their ability to exclude planned readmissions and identify only preventable pediatric readmissions, which is conceptually most relevant to quality of care. However, many of these measures were not intended to capture preventability but instead the related question of whether the readmission was planned. Therefore, we sought to evaluate the four readmission measures as they relate to both preventability and unplanned status as determined through medical record review with multidisciplinary care provider input.

METHODS

As part of a hospital-wide readmission reduction quality improvement collaborative at a free-standing tertiary care children’s hospital, clinicians from hospital medicine, cardiology, neonatology, and neurology teams reviewed 30-day readmissions using a standardized abstraction tool. All readmission events (observation or inpatient encounter) after any discharge (observation or inpatient encounter) from eligible units were reviewed; therefore, each hospitalization was a potential index hospitalization. We classified the preventability of each readmission with use of a previously described Likert scale with high interrater reliability.14 For these analyses, readmissions were considered preventable if the reviewing team rated them as either “more likely preventable” or “preventable in most circumstances.” Each readmission was also evaluated as planned or unplanned. Methods for readmission review and classification are in the Appendix.

We included all readmissions between July 2014 and June 2016. We compared the medical record review classifications with the assessments from each of the four measures of pediatric readmission. We calculated sensitivity and specificity for both outcomes (planned/unplanned and preventable/not preventable) for all four measures. For standardization of discussion, we categorized measure performance as "very poor" (less than 50%), "poor" (50%-75%), "fair" (75%-85%), "good" (85%-90%), "very good" (90%-95%), and "excellent" (greater than 95%). We also calculated positive and negative predictive value (PPV and NPV) over plausible ranges of prevalence using the sensitivity and specificity of each comparison (Appendix).
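The PPV and NPV calculations over a range of prevalences follow directly from Bayes' theorem. A minimal sketch; the 90%/30%/16% inputs below are illustrative values, not the study's results:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Compute PPV and NPV from sensitivity, specificity, and an assumed
    prevalence via Bayes' theorem."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Example: 90% sensitivity and 30% specificity at a 16% prevalence of
# truly preventable readmissions gives PPV ~0.20 and NPV ~0.94 -- high
# sensitivity alone cannot rescue PPV when prevalence is low.
ppv, npv = predictive_values(0.90, 0.30, 0.16)
```

This is why every measure in the study, regardless of sensitivity, shows very poor PPV for preventability: only about one in six reviewed readmissions was preventable.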

Of note, certain exclusions are outlined by the PACR and PPR algorithms. The PACR evaluates only readmission events that occur in children younger than 18 years. The PPR algorithm does not assign preventability if either the index or readmission event is classified as an observation stay or if it is part of a larger chain of readmissions.

RESULTS

Among 30-day readmissions considered, 1,643 were eligible for medical record review; 1,125 reviews were completed by the clinical teams (68.5%). The median time to readmission was 7 days (interquartile range [IQR], 4-18). Most children were non-Hispanic White (71%) or Black (20%). The median age at hospitalization was 2.3 years (IQR 0.4-12.1). Most children had Medicaid (56%) or private (41%) insurance. Most of the reviews were performed in cardiology (43%) and hospital medicine (37%) with patients in neurology (13%) and neonatology (7%) constituting the remaining reviews. Uncontrolled advancement of chronic disease was the most common readmission category on medical record review (25.1%), followed by unrelated readmission (20.7%), scheduled readmission (20.4%), and progression of acute disease (16.6%) (Appendix Table 2).

Assessment of Preventable and Unplanned Readmissions

On multidisciplinary medical record review, most readmissions were classified as not preventable (84.5%). Specifically, 64% were not preventable and unplanned; 20% were deemed not preventable and planned. Only 15% were classified as unplanned and preventable and 1% as planned and preventable (Appendix Figure: Population A/B).

Matching Chart Review to the Four Algorithms

All 1,125 readmissions were assessed by the all-cause and time flag readmission measures (Appendix Figure: Population A/B). After applying algorithm exclusions (details in Appendix), only 804 of the 1,125 (71.5%) reviewed readmissions matched for PACR readmission comparison (Appendix Figure: Population C); 487 of the 1,125 (43.3%) reviewed readmissions matched for PPR comparison (Appendix Figure: Population D).

All-Cause

Because all-cause determines only if a readmission occurs, the measure is by definition 100% sensitive and 0% specific in both assessment of preventability and unplanned readmission (Table: Section A).

Table. Sensitivity and Specificity of Preventable and Unplanned Readmission Metrics

Time Flag

The time flag measure identified 80% (866/1,112) of the readmissions as unplanned. This measure had very good sensitivity but very poor specificity in identifying preventable readmissions, which corresponded to very poor PPV and good to excellent NPV. In terms of identifying unplanned readmissions, the time flag measure had excellent sensitivity and very good specificity, which corresponded to very good to excellent PPV and good to very good NPV (Table: Section B).

PACR

The PACR algorithm identified 75% (599/796) of readmissions as unplanned. The PACR had good sensitivity but very poor specificity in identifying preventable readmissions, which corresponded to very poor PPV and fair to very good NPV. In terms of identifying unplanned readmissions, the PACR had fair sensitivity but poor specificity, which corresponded to fair PPV and poor NPV (Table: Section C).

PPR

The PPR algorithm identified 53% (257/487) of admissions as potentially preventable. The PPR algorithm had poor sensitivity and specificity in identifying preventable readmissions, which corresponded to very poor PPV and fair to very good NPV. In terms of identifying unplanned readmissions, the PPR algorithm had poor sensitivity and fair specificity in identifying unplanned readmissions, which corresponded to fair to good PPV and very poor to poor NPV (Table: Section D).

Evaluation of Excluded Readmission Events

Because both the PACR and PPR had large numbers of algorithm exclusions, we describe the preventability and unplanned assessment of the excluded readmission events. Both algorithms excluded preventable events. Of the 321 readmissions excluded by the PACR algorithm, 13.4% were classified as preventable by chart review. Likewise, 14.9% of 638 readmissions excluded by PPR were classified as preventable by chart review.

DISCUSSION

The ability to accurately capture preventable pediatric readmission is a goal for hospital quality experts and health policymakers alike. Of the four commonly used readmission measures, only PPR is designed to focus on preventability. Unfortunately, none of these four measures is adequately sensitive or specific to identify preventable readmissions; all measures had very poor PPV for preventability. Of the four measures, the time flag measure had the best sensitivity, specificity, PPV, and NPV for identifying unplanned readmissions.

The overall percentages of unplanned readmissions identified by the time flag and PACR measures match those identified in chart review: the time flag measure identified 80% of admissions as unplanned versus 79% identified by chart review (Appendix Figure: Population A/B); PACR classified 75% as unplanned versus 81% identified by chart review for PACR-eligible readmissions (Appendix Figure: Population C). In contrast, the PPR algorithm classified many more readmissions as potentially preventable (53%) than were identified by chart review (16%) (Appendix Figure: Population D). The PACR and PPR algorithms also exclude a significant number of readmissions that are unplanned and a smaller, but not trivial, number that are preventable; these exclusions limit their accuracy.

The ability to apply these four measures in real time during a hospitalization varies by metric. Two of the measures, the all-cause and time flag, can be applied during a readmission event, which is appealing for quality improvement initiatives. These measures allow providers to be notified that a current hospitalization is a readmission event, giving them the opportunity to learn from these events as they occur (Appendix Table 1). While "unplanned" is not the same as "potentially preventable," almost all potentially preventable readmissions are unplanned; therefore, accurately identifying unplanned readmissions is more informative than counting all-cause readmissions. Additionally, a low all-cause readmission rate can be indicative of poor access to scheduled procedures. Nevertheless, all-cause readmission is sometimes used to measure quality.1,8 While the time flag measure may be more useful for quality improvement initiatives and hospital providers, it relies on hospital registration time, which is not widely available in administrative data sources and, therefore, has limited usefulness to policymakers.

Both PACR and PPR require administrative claims analysis, which is appealing from a policy standpoint. However, the reliance on claims data means the inclusion/exclusion of events can occur only retrospectively, which limits the usefulness of these measures in learning and intervening in real time. When the two measures are compared, PACR offers better sensitivity and PPR offers better specificity with regard to identifying unplanned readmission. The PPR software overcalls preventable readmissions, identifying more readmissions as preventable than there actually are. Nevertheless, Medicaid in several states uses PPR for payment incentive.1,15-17 Given the poor performance of PPR in assessing both preventable and unplanned pediatric readmission, the use of this measure as a quality metric should be limited.

This study should be considered in the context of several limitations. First, because the assessment of preventability was determined as part of a learning quality improvement collaborative and not as a planned research endeavor, not all readmission reviews were completed, nor did we use other existing tools18 that allow preventability assessment via more structured medical record review. Second, we reviewed cases only from certain clinical services, which would limit generalizability of these findings to all pediatric admissions. However, given the low sensitivity and specificity of some of the metrics, we would not anticipate that the addition of other types of admissions would improve the sensitivity and specificity enough to ensure reliability. Third, while we relied on an established method to determine preventability, prior work has demonstrated that additional information gathered from families may change preventability assessments.19 Finally, due to the exclusions required by the PPR and PACR algorithms, not all readmission events were reviewed. However, these exclusions reflect the actual specifications of use for both measures.

CONCLUSION

The PPR software has poor fidelity in identifying preventable and unplanned pediatric readmission; this finding has broad policy implications given how widely it is used by state Medicaid offices to assess financial penalties. Among the four pediatric readmission measures used, the time flag metric best identifies unplanned readmissions.

Disclosures

The authors have no conflicts of interest or financial relationships relevant to this article to disclose.

Funding

Dr Auger’s research is supported by a grant from the Agency for Healthcare Research and Quality (1K08HS204735-01A1). The project described was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health, under Award Number 5UL1TR001425-04. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

References

1. State Medicaid Payment Policies for Inpatient Hospital Services. Medicaid and CHIP Payment and Access Commission; December 2018. Accessed June 1, 2019. https://www.macpac.gov/publication/macpac-inpatient-hospital-payment-landscapes/
2. Mangione-Smith R, Zhou C, Williams DJ, et al. Pediatric Respiratory Illness Measurement System (PRIMES) scores and outcomes. Pediatrics. 2019;144(2):e20190242. https://doi.org/10.1542/peds.2019-0242
3. Biondi EA, McCulloh R, Staggs VS, et al. Reducing Variability in the Infant Sepsis Evaluation (REVISE): a national quality initiative. Pediatrics. 2019;144(3):e20182201. https://doi.org/10.1542/peds.2018-2201
4. Statile AM, Schondelmeyer AC, Thomson JE, et al. Improving discharge efficiency in medically complex pediatric patients. Pediatrics. 2016;138(2):e20153832. https://doi.org/10.1542/peds.2015-3832
5. White CM, Statile AM, White DL, et al. Using quality improvement to optimise paediatric discharge efficiency. BMJ Qual Saf. 2014;23(5):428-436. https://doi.org/10.1136/bmjqs-2013-002556
6. Auger KA, Simmons JM, Tubbs-Cooley HL, et al; H20 Trial Study Group. Postdischarge nurse home visits and reuse: the Hospital to Home Outcomes (H2O) trial. Pediatrics. 2018;142(1):e20173919. https://doi.org/10.1542/peds.2017-3919
7. Auger KA, Shah SS, Tubbs-Cooley HL, et al. Effects of a 1-time nurse-led telephone call after pediatric discharge: the H2O II randomized clinical trial. JAMA Pediatr. 2018;172(9):e181482. https://doi.org/10.1001/jamapediatrics.2018.1482
8. Olmsted MG, Powell R, Murphy J, Bell D, Stanley M, Sanchez R. Methodology: U.S. News & World Report Best Children’s Hospitals 2019-20. U.S. News & World Report; June 17, 2019. Accessed June 16, 2020. https://www.usnews.com/static/documents/health/best-hospitals/BCH_Methodology_2019-20.pdf
9. Bardach NS, Vittinghoff E, Asteria-Peñaloza R, et al. Measuring hospital quality using pediatric readmission and revisit rates. Pediatrics. 2013;132(3):429-436. https://doi.org/10.1542/peds.2012-3527
10. Auger KA, Mueller EL, Weinberg SH, et al. A validated method for identifying unplanned pediatric readmission. J Pediatr. 2016;170:105-12.e102. https://doi.org/10.1016/j.jpeds.2015.11.051
11. Readmissions-Content. Boston Children’s Hospital. Accessed April 8, 2019. http://www.childrenshospital.org/research-and-innovation/research/centers/center-of-excellence-for-pediatric-quality-measurement-cepqm/cepqm-measures/pediatric-readmissions/content
12. Gay JC, Agrawal R, Auger KA, et al. Rates and impact of potentially preventable readmissions at children’s hospitals. J Pediatr. 2015;166(3):613-9.e5. https://doi.org/10.1016/j.jpeds.2014.10.052
13. Auger KA, Teufel RJ, Harris JM, et al. Children’s hospital characteristics and readmission metrics. Pediatrics. 2017;139(2):e20161720. https://doi.org/10.1542/peds.2016-1720
14. Hain PD, Gay JC, Berutti TW, Whitney GM, Wang W, Saville BR. Preventability of early readmissions at a children’s hospital. Pediatrics. 2013;131(1):e171-e181. https://doi.org/10.1542/peds.2012-0820
15. Potentially Preventable Events. Texas Health and Human Services. Accessed May 19, 2019. https://hhs.texas.gov/about-hhs/process-improvement/medicaid-chip-quality-efficiency-improvement/potentially-preventable-events
16. Potentially Preventable Readmissions. New York State Department of Health. Accessed May 28, 2019. https://regs.health.ny.gov/sites/default/files/pdf/recently_adopted_regulations/2011-02-23_potentially_preventable_readmissions.pdf
17. Potentially Preventable Readmissions Policy. Illinois Department of Healthcare and Family Services. Accessed May 28, 2019. https://www.illinois.gov/hfs/SiteCollectionDocuments/PPR_Overview.pdf
18. Jonas JA, Devon EP, Ronan JC, et al. Determining preventability of pediatric readmissions using fault tree analysis. J Hosp Med. 2016;11(5):329-335. https://doi.org/10.1002/jhm.2555
19. Toomey SL, Peltz A, Loren S, et al. Potentially preventable 30-day hospital readmissions at a children’s hospital. Pediatrics. 2016;138(2):e20154182. https://doi.org/10.1542/peds.2015-4182

Issue
Journal of Hospital Medicine 15(12)
Page Number
723-726. Published Online First November 18, 2020

© 2020 Society of Hospital Medicine

Correspondence Location
Katherine A Auger, MD; Email: katherine.auger@cchmc.org; Telephone: 513-803-8092; Twitter: @KathyAugerpeds.

Healthcare Resource Utilization Following a Discharge Against Medical Advice: An Analysis of Commercially Insured Adults


Discharges against medical advice (DAMAs), in which a patient leaves the hospital prior to a physician-recommended endpoint, represent approximately 1% to 2% of inpatient discharges in the United States.1 When compared with routine discharges, a DAMA is associated with adverse clinical consequences, including an increased risk of all-cause mortality.2,3 Additionally, due to incomplete care, a DAMA may result in increased healthcare resource utilization (HcRU), including the use of inpatient, emergency department (ED), and outpatient services in the postdischarge period. Quantifying these relationships can provide important information regarding an individual’s healthcare-seeking behavior following a DAMA.

Prior literature has focused on the association between a DAMA and the risk of inpatient readmission. Relative to routine discharges, a DAMA is associated with a 1.5 to 2 times increased risk of a 30-day readmission.3-9 However, these estimates are based on mixed-payer populations primarily composed (65%-80%) of individuals with public (Medicaid, Medicare) or no insurance. Further, they do not differentiate this association by payer type. It is unclear if prior results apply to commercially insured adults. These individuals represent a small but nonnegligible proportion (19%) of all DAMAs in the United States.10 Quantifying relationships among commercially insured adults can help advance our understanding of readmission patterns in the DAMA population.

There is limited evidence regarding the relationship between a DAMA and outpatient HcRU in the postdischarge period. Use of ED services after a DAMA has been explored only in specific disease populations such as asthma.4 Additionally, prior studies have reported a reduced frequency in the receipt of medication prescriptions and outpatient follow-up plans among individuals with a DAMA at the time of discharge.11,12 Whether these practices translate to altered patterns of postdischarge prescription drug fills or use of outpatient services is not known.

To address these substantive gaps in the literature, the present study evaluates the association between a DAMA and all-cause HcRU in the postdischarge period among commercially insured adults. We examined HcRU across all points of service including inpatient readmissions, ED visits, physician office visits, nonphysician outpatient encounters, and prescription drug fills. These results can serve as a benchmark for comparison to future studies on DAMAs among publicly insured or uninsured individuals. Furthermore, such knowledge can help providers, payers, and policy planners make evidence-based decisions regarding postdischarge healthcare delivery.

METHODS

Data Source

This retrospective study used a 10% random sample of enrollees in the IQVIA PharMetrics® Plus database (purchased by University of Maryland, Baltimore, under license from IQVIA). The database is composed of fully adjudicated claims and enrollment information from over 70 contributing US health plans and self-insured employer groups for over 140 million unique enrollees from 2006 onward. The enrollee population is generally representative of the commercially insured population that is younger than 65 years of age (with a subset of commercial Medicare and Medicaid) with respect to age and gender.

The database allows longitudinal follow-up for individuals using three files: medical claims, pharmacy claims, and insurance eligibility. The average length of enrollment is 39 months. The claims data represent payments to providers for services rendered to individuals covered by health plans. The medical claims file contains information on diagnostic and therapeutic services rendered in the inpatient and outpatient settings. The pharmacy claims file captures data on prescription drugs dispensed in retail and mail-order settings. The eligibility file contains demographic and insurance eligibility information for individuals.

Study Population

We identified all individuals aged 18 to 64 years with an inpatient admission record between January 1, 2007, and December 31, 2015. All individuals with continuous medical and prescription drug coverage from 6 months prior to the hospital admission date (baseline period) through 30 days following the discharge date (follow-up period) were included. Inpatient admissions with a missing discharge disposition or those that resulted in in-hospital death, discharge to a short-term hospital, skilled nursing facility, intermediate care facility, or any other type of facility were not considered for analysis. Only the first eligible inpatient admission was considered for analysis.

Main Predictor Variable

Individuals with a DAMA were analyzed as the case group. A DAMA was identified using the “Patient Status Code” variable, which represents the discharge disposition of each individual. Individuals who were discharged to home/self-care or discharged to a home health organization formed the control group (hereafter referred to as routine discharge).

Demographic, Clinical, and Hospitalization Characteristics

An individual’s age, sex, and region of residence were determined at the date of hospital admission. The Elixhauser algorithm was used to categorize comorbid conditions (as scores of 0, 1-2, ≥3 depending on number of comorbidities) based on International Classification of Diseases, Ninth Revision, Clinical Modification, diagnosis codes during the baseline period.13,14 The following characteristics of each individual’s eligible inpatient admission were captured: year, timing (weekday or weekend), length of stay (LOS, measured in days), and receipt of a surgical procedure.
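The collapsing of Elixhauser comorbidity counts into the three study categories can be illustrated with pandas; the counts below are toy values, whereas the study derived them from ICD-9-CM diagnosis codes during the baseline period:

```python
import pandas as pd

# Toy comorbidity counts per person (the study derived these from ICD-9-CM
# codes in the 6-month baseline period via the Elixhauser algorithm).
comorbidity_counts = pd.Series([0, 1, 2, 3, 5],
                               index=["p1", "p2", "p3", "p4", "p5"])

# Collapse counts into the study's three categories: 0, 1-2, >=3.
categories = pd.cut(comorbidity_counts,
                    bins=[-1, 0, 2, float("inf")],
                    labels=["0", "1-2", ">=3"])
print(categories.tolist())  # ['0', '1-2', '1-2', '>=3', '>=3']
```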

Outcomes

All-cause HcRU was identified during the 30-day postdischarge period. Specifically, we identified inpatient readmissions, ED visits, physician office visits, nonphysician outpatient encounters (eg, pathology, radiology, outpatient surgical services), and prescription drug fills. Binary variables (yes or no) were created for inpatient readmissions and ED visits, while the remaining HcRU categories (ie, physician office visits, nonphysician outpatient encounters, and prescription drug fills) were analyzed as count variables. In the sensitivity analyses, we provide results for HcRU outcomes among the subgroup of individuals who had at least 90 days of continuous medical and prescription drug benefits following hospital discharge.

Statistical Analysis

Descriptive Analysis

Measures of interest were reported using summary statistics appropriate to the nature of each variable. Continuous variables were compared using t tests, and categorical variables were compared using chi-square tests.
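A minimal illustration of these two descriptive comparisons with SciPy, using simulated data; the group sizes, means, and cell counts are invented for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous variable (eg, length of stay): two-sample t test on toy data.
los_dama = rng.normal(2.0, 1.0, 200)
los_routine = rng.normal(3.0, 1.0, 200)
t_stat, t_p = stats.ttest_ind(los_dama, los_routine)

# Categorical variable (eg, sex): chi-square test on a toy 2x2 table.
table = np.array([[117, 83],    # DAMA: male, female
                  [66, 134]])   # routine: male, female
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(t_p < .05, chi_p < .05)  # True True
```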

Propensity Score Matching

Cases and controls were matched using a 1:1 greedy matching algorithm based on propensity scores.15 We developed propensity scores based on confounders that we hypothesized would be associated with a DAMA and postdischarge HcRU. The propensity score model included the following variables: age, sex, region of residence, Elixhauser comorbidity index score, year of admission, timing of admission, LOS, and presence of any surgical procedure during the inpatient admission. The best match between cases and controls was determined based on the absolute difference in their propensity scores, which allowed for a maximal caliper width of 0.2 of the standard deviation of the logit of the propensity score.16 A standardized difference value of less than 0.1 was used to assess balance in baseline patient and hospital characteristics between cases and controls consistent with prior literature.17,18 Proportions and balance, as measured by standardized differences between baseline covariates across cases and controls in the matched sample, are displayed in tabular format (Appendix Table 1).
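A simplified NumPy sketch of 1:1 greedy matching with a caliper of 0.2 standard deviations of the logit of the propensity score. This mirrors the logic described above but is not the study's SAS implementation, and the propensity scores are toy values:

```python
import numpy as np

def greedy_caliper_match(ps_cases, ps_controls, caliper_scale=0.2):
    """1:1 greedy nearest-neighbor matching on the logit of the propensity
    score, with a caliper of `caliper_scale` standard deviations of the
    pooled logit scores. A sketch of the approach described in the text."""
    logit = lambda p: np.log(p / (1 - p))
    pooled = np.concatenate([ps_cases, ps_controls])
    caliper = caliper_scale * np.std(logit(pooled))

    available = set(range(len(ps_controls)))   # unmatched control indices
    pairs = []
    for i, p in enumerate(ps_cases):
        if not available:
            break
        # Nearest remaining control on the logit scale.
        j = min(available, key=lambda k: abs(logit(ps_controls[k]) - logit(p)))
        if abs(logit(ps_controls[j]) - logit(p)) <= caliper:
            pairs.append((i, j))
            available.remove(j)                # match without replacement
    return pairs

cases = np.array([0.30, 0.50, 0.90])
controls = np.array([0.31, 0.52, 0.10, 0.49])
# Case 2 (ps = 0.90) has no control within the caliper and goes unmatched.
print(greedy_caliper_match(cases, controls))  # [(0, 0), (1, 3)]
```

Cases that find no control within the caliper are dropped, which is why the matched sample (2,245 pairs) is smaller than the full DAMA group.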

Healthcare Resource Utilization

We estimated the adjusted odds ratio (AOR) using a logistic regression model. The AOR quantified the association between a DAMA and the prevalence of all-cause inpatient readmissions and ED visits during the 30-day postdischarge period. We estimated incidence rate ratios (IRRs) for count outcomes. Given the large number of individuals with no physician office visits, nonphysician outpatient encounters, or prescription drug fills, we estimated model parameters for IRRs using a finite mixture negative binomial hurdle model.19 We considered the data to represent a mixture of a constant distribution (which always generates zero counts) and a zero-truncated distribution (which always generates nonzero counts). The finite mixture count models include two outcomes: the mixing probabilities and the count distribution. The mixing probabilities quantify the probability that an observation for the HcRU category is drawn from either the constant distribution (with mass at zero) or the count distribution. Conditional on having positive values, a zero-truncated generalized linear model (GLM) governs the count variable. Compared with other GLM specifications (eg, Poisson, negative binomial, zero-inflated), the negative binomial hurdle model provided the best fit across several information criterion statistics (Appendix Figures 1-3 and Appendix Tables 2-4).
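The data-generating structure assumed by the hurdle model (a point mass at zero mixed with a zero-truncated count distribution) can be illustrated by simulation. The NumPy sketch below draws from such a model with toy parameters; it is illustrative only, since the study estimated rather than simulated the model, in SAS:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_hurdle_nb(n, p_zero, mu, alpha):
    """Draw counts from a negative binomial hurdle model: with probability
    p_zero the count is 0; otherwise it comes from a zero-truncated negative
    binomial with untruncated mean mu and dispersion alpha."""
    size = 1.0 / alpha               # NumPy's NB 'n' parameter
    prob = size / (size + mu)        # so the untruncated mean equals mu
    counts = np.zeros(n, dtype=np.int64)
    positive = rng.random(n) >= p_zero          # hurdle: zero vs positive
    draws = rng.negative_binomial(size, prob, positive.sum())
    # Rejection step: redraw zeros so the positive part is zero-truncated.
    while (zeros := draws == 0).any():
        draws[zeros] = rng.negative_binomial(size, prob, zeros.sum())
    counts[positive] = draws
    return counts

counts = sample_hurdle_nb(10_000, p_zero=0.4, mu=3.0, alpha=0.5)
# All zeros come from the point mass, so their share is close to p_zero.
print(round((counts == 0).mean(), 2))
```

In estimation the process runs in reverse: a binary model governs the zero-versus-positive split, and a zero-truncated NB regression governs the positive counts, which together yield the reported IRRs.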

The GLM results provided IRRs for the counts of HcRU. Ratios were interpreted as evidence of increased HcRU (IRR ≥ 1.0) or decreased HcRU (IRR < 1.0) among individuals with a DAMA compared with those discharged routinely. For all HcRU analyses, we reported results for the matched sample. All analyses were conducted using SAS version 9.4 (SAS Institute), and statistical significance was determined at α = .05. The study was approved by the University of Maryland, Baltimore, Institutional Review Board (HP-00081497).

RESULTS

The unmatched sample included 457,530 individuals, of whom 0.5% had a DAMA. A consort diagram illustrating cohort inclusion and exclusion criteria is presented in Appendix Figure 4. Demographic, clinical, and inpatient admission characteristics of the unmatched sample and for subgroups defined by discharge status are displayed in Table 1. In the unmatched sample, the median age at admission was higher for individuals with a DAMA than for those discharged routinely (43 vs 42 years), and the proportion of males was higher among those with a DAMA (58.4% vs 33.1%). There were statistically significant differences in geographic region of residence and comorbidity burden across both groups. The median LOS was shorter (1 day vs 2 days), the proportion of weekend admissions was higher (22.2% vs 16.3%), and the proportion of inpatient surgical procedures was lower (12.9% vs 59.2%) among those with a DAMA than among those with routine discharges. The propensity score-matched sample included 2,245 cases and 2,245 controls (Appendix Table 1). Standardized differences for all baseline factors were less than 0.1, indicating that cases and controls were well matched on the included baseline factors.

Demographic, Clinical, and Hospitalization Characteristics of the Unmatched Sample

Summary Statistics: Proportions and Counts

Across the DAMA and routine discharge groups, the proportion of individuals with a 30-day inpatient readmission was similar (19.5% vs 18.7%; P = .47), whereas the proportion with an ED visit was higher in the DAMA group (18.6% vs 9.1%; P < .01). There were no differences in the median number of inpatient readmissions (median, 0) or ED visits (median, 0) across both groups. Individuals with a DAMA and those discharged routinely displayed similar median counts of 30-day physician office visits (median, 1) and nonphysician outpatient encounters (median, 1) (Table 2). Individuals with a DAMA displayed a lower median number of prescription drug fills (median, 2 vs 3) than those with a routine discharge (Table 2).

Summary Statistics for HcRU During the 30-day Postdischarge Period

Main Analysis: Thirty-Day Healthcare Resource Utilization

The associations between a DAMA and 30-day inpatient readmissions and ED visits based on the matched sample are presented in Table 3. Individuals with a DAMA had increased odds for an ED visit (AOR, 2.28; 95% CI, 1.90-2.72) but no significant difference in the odds of a 30-day inpatient readmission (AOR, 1.06; 95% CI, 0.91-1.23) compared with those discharged routinely.

Adjusted Odds Ratios for Binary Outcomes During 30-Day Postdischarge Period

The association between a DAMA and count HcRU outcomes is presented in Table 4. Compared with those discharged routinely, individuals with a DAMA displayed no significant difference in rates for physician office visits (IRR, 1.01; 95% CI, 0.91-1.11), nonphysician outpatient encounters (IRR, 0.89; 95% CI, 0.78-1.00), and prescription drug fills (IRR, 1.03; 95% CI, 0.97-1.09) during the 30-day postdischarge period.

Adjusted IRR for Count Outcomes During 30-Day Postdischarge Period

Sensitivity Analysis: Ninety-Day Healthcare Resource Utilization

Relative to those discharged routinely, individuals with a DAMA had statistically significant increased odds of 90-day inpatient readmissions (AOR, 1.18; 95% CI, 1.02-1.36), odds of ED visits (AOR, 2.16; 95% CI, 1.85-2.51), and rates of prescription drug fills (IRR, 1.32; 95% CI, 1.29-1.35). No statistically significant differences were observed in the rates of physician office visits and nonphysician outpatient encounters across both groups.

DISCUSSION

In this commercially insured sample of working age individuals, we identified an association between a DAMA and the likelihood and intensity of postdischarge HcRU. The direction of the association varied across categories of HcRU and the duration of follow-up. A DAMA was associated with increased odds of 30-day ED visits but not 30-day readmissions compared with routine discharges. No significant differences were observed in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups. To our knowledge, this is the first study on DAMAs that examines postdischarge HcRU outside the inpatient setting.

The 0.5% prevalence of DAMAs in our study was lower than the approximate 1% to 2% value typically reported in the literature. Prior studies have generally reported results based on mixed-payer populations.3-10 These mixed-payer populations include publicly insured (Medicare or Medicaid) or uninsured stays, which account for a disproportionate share of all DAMAs. In contrast, commercially insured stays account for the lowest proportion of all DAMAs.10 Similar to prior literature,5 the DAMA group in our study had a higher proportion of males, a higher comorbidity burden, and a shorter LOS than the routinely discharged group.

We observed a greater likelihood of ED utilization after a DAMA. Similar findings have been reported, which may indicate that patients with a DAMA receive inadequate treatment at the time of discharge and may require further acute treatment. For example, a prior study reported that, after a DAMA, individuals with asthma were four times more likely to have an ED visit within 14 days compared with those discharged routinely.4

Contrary to prior findings,3-9 we found no significant difference in the odds of a 30-day inpatient readmission across the DAMA and routine discharge groups, which may be attributable to differences in the populations studied. Those previous studies used mixed-payer populations and did not differentiate results by payer type. The mixed-payer populations in these studies were older (mean ages of 55 years and above) and had a greater comorbidity burden than our commercially insured population. Furthermore, some of these studies were limited to single sites,8 single-state hospital systems,3,4,9 or specific medical populations.3,4,6-9 Our national sample of commercially insured adults is considerably younger, with a mean age of 43 years. Thirty days may also be too brief a window to observe enough inpatient readmissions for comparative analyses; this is suggested by our results, which indicated an association between a DAMA and 90-day inpatient readmission. Additionally, the nonsignificant findings for 30-day inpatient readmissions may reflect the small sample size of the DAMA group in our study, which may have limited robust statistical inference. Future studies in a larger population of commercially insured individuals with a DAMA are required to confirm these findings.

The nonsignificant differences in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups may help explain the null association with 30-day inpatient readmissions. Prior literature on specific medical populations and on general hospital admissions reports that early outpatient follow-up can help prevent 30-day readmissions.20-25 In our sample, we observed similar rates of outpatient follow-up across the DAMA and routinely discharged groups. Prior studies based on single hospital sites have reported that, at the time of discharge, a lower proportion of individuals with a DAMA received medication prescriptions and outpatient follow-up plans compared with those discharged routinely.11,12 In contrast, we evaluated prescription drug fills and outpatient visits during the postdischarge period, which may explain the difference in findings.

The present study has several strengths. To the best of our knowledge, our study represents the first and largest retrospective analysis of DAMAs in a national sample of commercially insured adults. In addition to a large generalizable sample, we examine HcRU after a DAMA across major points of service over a longitudinal postdischarge period. Our results provide a comprehensive understanding of utilization outcomes in this population including those outside the inpatient setting, which has been the focus of prior literature. These findings can help guide the implementation of appropriate patient- and system-level interventions to optimize DAMA prevention and mitigate the associated utilization burden on the healthcare system in the postdischarge period.26,27

Our findings should be interpreted with certain limitations in mind. First, this study used data from a commercially insured sample and may not be generalizable to publicly insured or uninsured populations. Second, like prior DAMA studies that used the Nationwide Readmissions Database,5-7 our study was unable to account for individual-level factors such as race, marital status, family social support, income, health literacy, and activation in self-care. Further, given the limitations of our data, we were unable to control for hospital characteristics such as bed size, urban-rural designation, teaching status, and control (eg, private or government ownership). Despite the use of propensity score methods to balance the comparison groups on observable sources of confounding, we cannot rule out the possibility of residual confounding. Lastly, due to a lack of data on postdischarge mortality, we could not account for the competing risk of death in our analysis. However, in a population with an average age of 43 years, we did not expect high or differential 30- or 90-day postdischarge mortality rates across both groups.

Our findings suggest several important directions for future research. First, it will be useful to examine these associations among publicly insured and uninsured samples, in which a DAMA is more prevalent and in which the associations with HcRU may be more pronounced than in the commercially insured population. Second, future research should identify subgroups of DAMA patients with an increased propensity for postdischarge HcRU. This can help in the design of individualized outpatient follow-up plans that address patient-specific medical and social needs. Finally, our findings highlight the need for education, practice guidelines, and suitable interventions to help providers in the prevention and management of a DAMA.

CONCLUSION

Using data from a commercially insured population, we identified associations between a DAMA and postdischarge HcRU. The associations differed by category of HcRU. We identified a positive association with the likelihood of ED utilization but no association with the likelihood of 30-day inpatient readmission or general outpatient utilization. Our results indicate that the examination of inpatient readmissions after a DAMA should not be considered in isolation. The identification of the full range of outpatient and inpatient HcRU after a DAMA in a broad population of patients can improve our understanding of outcomes following a DAMA and support appropriate system-level interventions designed to reduce their prevalence.

Acknowledgments

The statements, findings, conclusions, views, and opinions contained and expressed in this manuscript are based in part on data obtained under license from IQVIA. Source: IQVIA PharMetrics® Plus January 2006 – December 2015, IQVIA. All Rights Reserved. The statements, findings, conclusions, views, and opinions contained and expressed herein are not necessarily those of IQVIA or any of its affiliated or subsidiary entities.

Disclosures

Dr Onukwugha reports grants from Bayer Healthcare Pharmaceuticals, grants from Pfizer, Inc, and personal fees from Novo Nordisk outside the submitted work. The other authors have nothing to disclose. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the US Department of Veterans Affairs, the U.S. Government, or the VA National Center for Ethics in Health Care.

Funding

The authors acknowledge the support of the University of Maryland, Baltimore Institute for Clinical & Translational Research (ICTR) through the ICTR Voucher Program.

References

1. Alfandre DJ. “I’m going home”: discharges against medical advice. Mayo Clin Proc. 2009;84(3):255-260. https://doi.org/10.4065/84.3.255
2. Garland A, Ramsey CD, Fransoo R, et al. Rates of readmission and death associated with leaving hospital against medical advice: a population-based study. CMAJ. 2013;185(14):1207-1214. https://doi.org/10.1503/cmaj.130029
3. Fiscella K, Meldrum S, Barnett S. Hospital discharge against advice after myocardial infarction: deaths and readmissions. Am J Med. 2007;120(12):1047-1053. https://doi.org/10.1016/j.amjmed.2007.08.024
4. Baptist AP, Warrier I, Arora R, Ager J, Massanari RM. Hospitalized patients with asthma who leave against medical advice: characteristics, reasons, and outcomes. J Allergy Clin Immunol. 2007;119(4):924-929. https://doi.org/10.1016/j.jaci.2006.11.695
5. Kumar N. Burden of 30-day readmissions associated with discharge against medical advice among inpatients in the United States. Am J Med. 2019;132(6):708-717.e4. https://doi.org/10.1016/j.amjmed.2019.01.023
6. Kwok CS, Walsh MN, Volgman A, et al. Discharge against medical advice after hospitalisation for acute myocardial infarction. Heart. 2019;105(4):315-321. https://doi.org/10.1136/heartjnl-2018-313671
7. Patel B, Prousi G, Shah M, et al. Thirty-day readmission rate in acute heart failure patients discharged against medical advice in a matched cohort study. Mayo Clin Proc. 2018;93(10):1397-1403. https://doi.org/10.1016/j.mayocp.2018.04.023
8. Southern WN, Nahvi S, Arnsten JH. Increased risk of mortality and readmission among patients discharged against medical advice. Am J Med. 2012;125(6):594-602. https://doi.org/10.1016/j.amjmed.2011.12.017
9. Onukwugha E, Mullins D, Loh FE, Saunders E, Shaya FT, Weir MR. Readmissions after unauthorized discharges in the cardiovascular setting. Med Care. 2011;49(2):215-224. https://doi.org/10.1097/mlr.0b013e31820192a5
10. Stranges E, Wier L, Merrill CT, Steiner C. Hospitalizations in which Patients Leave the Hospital against Medical Advice (AMA), 2007. HCUP Statistical Brief #78. Healthcare Cost and Utilization Project, Agency for Healthcare Research and Quality; August 2009. Accessed April 7, 2020. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb78.pdf
11. Edwards J, Markert R, Bricker D. Discharge against medical advice: how often do we intervene? J Hosp Med. 2013;8(10):574-577. https://doi.org/10.1002/jhm.2087
12. Stearns CR, Bakamjian A, Sattar S, Weintraub MR. Discharges against medical advice at a county hospital: provider perceptions and practice. J Hosp Med. 2017;12(1):11-17. https://doi.org/10.1002/jhm.2672
13. Garland A, Fransoo R, Olafson K, et al. The Epidemiology and Outcomes of Critical Illness in Manitoba. Manitoba Centre for Health Policy; April 2012. Accessed April 7, 2020. http://mchp-appserv.cpe.umanitoba.ca/reference/MCHP_ICU_Report_WEB_(20120403).pdf
14. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. https://doi.org/10.1097/00005650-199801000-00004
15. Austin PC. A comparison of 12 algorithms for matching on the propensity score. Stat Med. 2014;33(6):1057-1069. https://doi.org/10.1002/sim.6004
16. Austin PC. Optimal caliper widths for propensity‐score matching when estimating differences in means and differences in proportions in observational studies. Pharm Stat. 2011;10(2):150-161. https://doi.org/10.1002/pst.433
17. Austin PC, Mamdani MM. A comparison of propensity score methods: a case‐study estimating the effectiveness of post‐AMI statin use. Stat Med. 2006;25(12):2084-2106. https://doi.org/10.1002/sim.2328
18. Normand ST, Landrum MB, Guadagnoli E, et al. Validating recommendations for coronary angiography following acute myocardial infarction in the elderly: a matched analysis using propensity scores. J Clin Epidemiol. 2001;54(4):387-398. https://doi.org/10.1016/s0895-4356(00)00321-8
19. Mullahy J. Specification and testing of some modified count data models. J Econometrics. 1986;33(3):341-365. https://doi.org/10.1016/0304-4076(86)90002-3
20. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients—development of a discharge checklist for hospitalists. J Hosp Med. 2006;1(6):354-360. https://doi.org/10.1002/jhm.129
21. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. JAMA. 2010;303(17):1716-1722. https://doi.org/10.1001/jama.2010.533
22. Leschke J, Panepinto JA, Nimmer M, Hoffmann RG, Yan K, Brousseau DC. Outpatient follow‐up and rehospitalizations for sickle cell disease patients. Pediatr Blood Cancer. 2012;58(3):406-409. https://doi.org/10.1002/pbc.23140
23. Misky GJ, Wald HL, Coleman EA. Post‐hospitalization transitions: Examining the effects of timing of primary care provider follow‐up. J Hosp Med. 2010;5(7):392-397. https://doi.org/10.1002/jhm.666
24. Muus K, Knudson A, Klug MG, Gokun J, Sarrazin M, Kaboli P. Effect of post-discharge follow-up care on re-admissions among US veterans with congestive heart failure: a rural-urban comparison. Rural Remote Health. 2010;10(2):1447. https://doi.org/10.22605/RRH1447
25. Ryan J, Kang S, Dolacky S, Ingrassia J, Ganeshan R. Change in readmissions and follow-up visits as part of a heart failure readmission quality improvement initiative. Am J Med. 2013;126(11):989-994.e1. https://doi.org/10.1016/j.amjmed.2013.06.027
26. Alfandre D. Improving quality in against medical advice discharges—more empirical evidence, enhanced professional education, and directed systems changes. J Hosp Med. 2017;12(1):59-60. https://doi.org/10.1002/jhm.2678
27. Nagarajan M, Offurum AI, Gulati M, Onukwugha E. Discharges Against Medical Advice: Prevalence, Predictors, and Populations. In: Alfandre D, ed. Against‐Medical‐Advice Discharges from the Hospital. Springer; 2018:11-29.

Journal of Hospital Medicine. 2020;15(12):716-722. Published Online First November 18, 2020.

Discharges against medical advice (DAMAs), in which a patient leaves the hospital prior to a physician-recommended endpoint, represent approximately 1% to 2% of inpatient discharges in the United States.1 When compared with routine discharges, a DAMA is associated with adverse clinical consequences, including an increased risk of all-cause mortality.2,3 Additionally, due to incomplete care, a DAMA may result in increased healthcare resource utilization (HcRU), including the use of inpatient, emergency department (ED), and outpatient services in the postdischarge period. Quantifying these relationships can provide important information regarding an individual’s healthcare-seeking behavior following a DAMA.

Prior literature has focused on the association between a DAMA and the risk of inpatient readmission. Relative to routine discharges, a DAMA is associated with a 1.5 to 2 times increased risk of a 30-day readmission.3-9 However, these estimates are based on mixed-payer populations primarily composed (65%-80%) of individuals with public (Medicaid, Medicare) or no insurance. Further, they do not differentiate this association by payer type. It is unclear if prior results apply to commercially insured adults. These individuals represent a small but nonnegligible proportion (19%) of all DAMAs in the United States.10 Quantifying relationships among commercially insured adults can help advance our understanding of readmission patterns in the DAMA population.

There is limited evidence regarding the relationship between a DAMA and outpatient HcRU in the postdischarge period. Use of ED services after a DAMA has been explored only in specific disease populations such as asthma.4 Additionally, prior studies have reported a reduced frequency in the receipt of medication prescriptions and outpatient follow-up plans among individuals with a DAMA at the time of discharge.11,12 Whether these practices translate to altered patterns of postdischarge prescription drug fills or use of outpatient services is not known.

To address these substantive gaps in the literature, the present study evaluates the association between a DAMA and all-cause HcRU in the postdischarge period among commercially insured adults. We examined HcRU across all points of service including inpatient readmissions, ED visits, physician office visits, nonphysician outpatient encounters, and prescription drug fills. These results can serve as a benchmark for comparison to future studies on DAMAs among publicly insured or uninsured individuals. Furthermore, such knowledge can help providers, payers, and policy planners make evidence-based decisions regarding postdischarge healthcare delivery.


Summary Statistics for HcRU During the 30-day Postdischarge Period

Main Analysis: Thirty-Day Healthcare Resource Utilization

The associations between a DAMA and 30-day inpatient readmissions and ED visits based on the matched sample are presented in Table 3. Individuals with a DAMA had increased odds for an ED visit (AOR, 2.28; 95% CI, 1.90-2.72) but no significant difference in the odds of a 30-day inpatient readmission (AOR, 1.06; 95% CI, 0.91-1.23) compared with those discharged routinely.

Adjusted Odds Ratios for Binary Outcomes During 30-Day Postdischarge Period

The association between a DAMA and count HcRU outcomes is presented in Table 4. Compared with those discharged routinely, individuals with a DAMA displayed no significant difference in rates for physician office visits (IRR, 1.01; 95% CI, 0.91-1.11), nonphysician outpatient encounters (IRR, 0.89; 95% CI, 0.78-1.00), and prescription drug fills (IRR, 1.03; 95% CI, 0.97-1.09) during the 30-day postdischarge period.

Adjusted IRR for Count Outcomes During 30-Day Postdischarge Period

Sensitivity Analysis: Ninety-Day Healthcare Resource Utilization

Relative to those discharged routinely, individuals with a DAMA had statistically significant increased odds of 90-day inpatient readmissions (AOR, 1.18; 95% CI, 1.02-1.36), odds of ED visits (AOR, 2.16; 95% CI, 1.85-2.51), and rates of prescription drug fills (IRR, 1.32; 95% CI, 1.29-1.35). No statistically significant differences were observed in the rates of physician office visits and nonphysician outpatient encounters across both groups.

DISCUSSION

In this commercially insured sample of working age individuals, we identified an association between a DAMA and the likelihood and intensity of postdischarge HcRU. The direction of the association varied across categories of HcRU and the duration of follow-up. A DAMA was associated with increased odds of 30-day ED visits but not 30-day readmissions compared with routine discharges. No significant differences were observed in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups. To our knowledge, this is the first study on DAMAs that examines postdischarge HcRU outside the inpatient setting.

The 0.5% prevalence of DAMAs in our study was lower than the approximate 1% to 2% value that is typically reported in the literature. Prior studies have typically reported results based on mixed-payer populations.3-10 These mixed-payer populations include publicly insured (Medicare or Medicaid) or uninsured stays, which account for a disproportionate share of all DAMAs. In contrast, commercially insured stays account for the lowest proportion of all DAMAs.10 Similar to prior literature,5 the DAMA group in our study was younger, had a higher proportion of males, had a higher comorbidity burden, and had a shorter LOS than the routinely discharged group.

We observed a greater likelihood of ED utilization after a DAMA. Similar findings have been reported, which may indicate that patients with a DAMA receive inadequate treatment at the time of discharge and may require further acute treatment. For example, a prior study reported that, after a DAMA, individuals with asthma were four times more likely to have an ED visit within 14 days compared with those discharged routinely.4

Contrary to prior findings,3-9 we found no significant difference in the odds of a 30-day inpatient readmission across the DAMA and routine discharge groups, which may be attributable to differences in the populations studied. Those previous studies used mixed payer populations and did not differentiate results by payer type. The mixed payer populations in these studies were older (mean ages, 55 years and above) and had an increased comorbidity burden compared with our commercially insured population. Furthermore, some of these studies were either limited to single sites,8 single state hospital systems,3,4,9 or focused on specific medical populations.3,4,6-9 Our national sample of commercially insured adults is considerably younger, with a mean age of 43 years. Thirty days may be too brief to observe enough inpatient readmissions for the purpose of comparative analyses. This is suggested by our results, which indicated that there is an association between DAMA and 90-day inpatient readmission. Additionally, nonsignificant findings for 30-day inpatient readmissions may also be due to the small sample size of the DAMA group in our study, which may have limited robust statistical inference. Future studies in a larger population of commercially insured individuals with a DAMA are required to confirm these findings.

Nonsignificant differences in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups may explain the null association with 30-day inpatient readmissions. Prior literature on specific medical populations or individuals with general hospital admissions report that early outpatient follow-up can help prevent 30-day readmissions.20-25 In our sample, we observed similar rates of outpatient follow-up across the DAMA and routinely discharged groups. Prior studies based on single hospital sites have reported that, at the time of discharge, a lower proportion of individuals with a DAMA received medication prescriptions and outpatient follow-up plans compared with those discharged routinely.11,12 In contrast, we evaluated prescription drug fills and outpatient visits during the postdischarge period, which may explain the difference in findings.

The present study has several strengths. To the best of our knowledge, our study represents the first and largest retrospective analysis of DAMAs in a national sample of commercially insured adults. In addition to a large generalizable sample, we examine HcRU after a DAMA across major points of service over a longitudinal postdischarge period. Our results provide a comprehensive understanding of utilization outcomes in this population including those outside the inpatient setting, which has been the focus of prior literature. These findings can help guide the implementation of appropriate patient- and system-level interventions to optimize DAMA prevention and mitigate the associated utilization burden on the healthcare system in the postdischarge period.26,27

Our findings should be interpreted with certain limitations in mind. First, this study used data based on a commercially insured sample of patients and may not be generalizable to publicly insured or uninsured samples. Second, like prior DAMA studies that used the Nationwide Readmissions Database instead,5-7 our study was unable to account for individual-level factors such as race, marital status, family social support, income, health literacy, and activation in self-care. Further, given the limitations of our data, we were unable to control for hospital characteristics such as bed size, urban-rural designation, teaching status, and control (eg, private or government ownership). Despite the use of propensity score methods to balance both comparison groups on observable sources of confounding, we cannot rule out the possibility of residual confounding. Lastly, due to a lack of data on postdischarge mortality outcomes, we could not control for competing risk of death in our analysis. However, in a population with an average age of 43 years, we did not expect high or differential 30- or 90-day postdischarge mortality rates across both groups.

Our findings suggest several important directions for future research. First, it will be useful to examine these associations among publicly insured and uninsured samples in which a DAMA is more prevalent and in which the associations with HcRU may be more pronounced than they are in the commercially insured population. Secondly, future research should identify subgroups of DAMA patients with an increased propensity for postdischarge HcRU. This can help in the design of individualized outpatient follow-up plans that address patient-specific medical and social needs. Finally, our findings highlight the need for education, practice guidelines, and suitable interventions to help providers in the prevention and management of a DAMA.

CONCLUSION

Using data from a commercially insured population, we identified associations between a DAMA and postdischarge HcRU. The associations differed by category of HcRU. We identified a positive association with the likelihood of ED utilization but no association with the likelihood of 30-day inpatient readmission or general outpatient utilization. Our results indicate that the examination of inpatient readmissions after a DAMA should not be considered in isolation. The identification of the full range of outpatient and inpatient HcRU after a DAMA in a broad population of patients can improve our understanding of outcomes following a DAMA and support appropriate system-level interventions designed to reduce their prevalence.

Acknowledgments

The statements, findings, conclusions, views, and opinions contained and expressed in this manuscript are based in part on data obtained under license from IQVIA. Source: IQVIA PharMetrics® Plus January 2006 – December 2015, IQVIA. All Rights Reserved. The statements, findings, conclusions, views, and opinions contained and expressed herein are not necessarily those of IQVIA or any of its affiliated or subsidiary entities.

Disclosures

Dr Onukwugha reports grants from Bayer Healthcare Pharmaceuticals, grants from Pfizer, Inc, and personal fees from Novo Nordisk outside the submitted work. The other authors have nothing to disclose. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the US Department of Veterans Affairs, the U.S. Government, or the VA National Center for Ethics in Health Care.

Funding

The authors acknowledge the support of the University of Maryland, Baltimore Institute for Clinical & Translational Research (ICTR) through the ICTR Voucher Program.

Discharges against medical advice (DAMAs), in which a patient leaves the hospital prior to a physician-recommended endpoint, represent approximately 1% to 2% of inpatient discharges in the United States.1 When compared with routine discharges, a DAMA is associated with adverse clinical consequences, including an increased risk of all-cause mortality.2,3 Additionally, due to incomplete care, a DAMA may result in increased healthcare resource utilization (HcRU), including the use of inpatient, emergency department (ED), and outpatient services in the postdischarge period. Quantifying these relationships can provide important information regarding an individual’s healthcare-seeking behavior following a DAMA.

Prior literature has focused on the association between a DAMA and the risk of inpatient readmission. Relative to routine discharges, a DAMA is associated with a 1.5 to 2 times increased risk of a 30-day readmission.3-9 However, these estimates are based on mixed-payer populations primarily composed (65%-80%) of individuals with public (Medicaid, Medicare) or no insurance. Further, they do not differentiate this association by payer type. It is unclear if prior results apply to commercially insured adults. These individuals represent a small but nonnegligible proportion (19%) of all DAMAs in the United States.10 Quantifying relationships among commercially insured adults can help advance our understanding of readmission patterns in the DAMA population.

There is limited evidence regarding the relationship between a DAMA and outpatient HcRU in the postdischarge period. Use of ED services after a DAMA has been explored only in specific disease populations such as asthma.4 Additionally, prior studies have reported a reduced frequency in the receipt of medication prescriptions and outpatient follow-up plans among individuals with a DAMA at the time of discharge.11,12 Whether these practices translate to altered patterns of postdischarge prescription drug fills or use of outpatient services is not known.

To address these substantive gaps in the literature, the present study evaluates the association between a DAMA and all-cause HcRU in the postdischarge period among commercially insured adults. We examined HcRU across all points of service including inpatient readmissions, ED visits, physician office visits, nonphysician outpatient encounters, and prescription drug fills. These results can serve as a benchmark for comparison to future studies on DAMAs among publicly insured or uninsured individuals. Furthermore, such knowledge can help providers, payers, and policy planners make evidence-based decisions regarding postdischarge healthcare delivery.

METHODS

Data Source

This retrospective study used a 10% random sample of enrollees in the IQVIA PharMetrics® Plus database (purchased by University of Maryland, Baltimore, under license from IQVIA). The database is composed of fully adjudicated claims and enrollment information from over 70 contributing US health plans and self-insured employer groups for over 140 million unique enrollees from 2006 onward. The enrollee population is generally representative of the commercially insured population that is younger than 65 years of age (with a subset of commercial Medicare and Medicaid) with respect to age and gender.

The database allows longitudinal follow-up for individuals using three files: medical claims, pharmacy claims, and insurance eligibility. The average length of enrollment is 39 months. The claims data represent payments to providers for services rendered to individuals covered by health plans. The medical claims file contains information on diagnostic and therapeutic services rendered in the inpatient and outpatient settings. The pharmacy claims file captures data on prescription drugs dispensed in retail and mail-order settings. The eligibility file contains demographic and insurance eligibility information for individuals.

Study Population

We identified all individuals aged 18 to 64 years with an inpatient admission record between January 1, 2007, and December 31, 2015. All individuals with continuous medical and prescription drug coverage from 6 months prior to the hospital admission date (baseline period) through 30 days following the discharge date (follow-up period) were included. Inpatient admissions with a missing discharge disposition or those that resulted in in-hospital death, discharge to a short-term hospital, skilled nursing facility, intermediate care facility, or any other type of facility were not considered for analysis. Only the first eligible inpatient admission was considered for analysis.

Main Predictor Variable

Individuals with a DAMA were analyzed as the case group. A DAMA was identified using the “Patient Status Code” variable, which represents the discharge disposition of each individual. Individuals who were discharged to home/self-care or discharged to a home health organization formed the control group (hereafter referred to as routine discharge).

Demographic, Clinical, and Hospitalization Characteristics

An individual’s age, sex, and region of residence were determined at the date of hospital admission. The Elixhauser algorithm was used to categorize comorbid conditions (as scores of 0, 1-2, or ≥3, depending on the number of comorbidities) based on International Classification of Diseases, Ninth Revision, Clinical Modification, diagnosis codes during the baseline period.13,14 The following characteristics of each individual’s eligible inpatient admission were captured: year, timing (weekday or weekend), length of stay (LOS, measured in days), and receipt of a surgical procedure.
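The three-level comorbidity grouping amounts to a simple bucketing rule, sketched below. The function name is ours, for illustration only; in the study the comorbidity flags themselves come from the Elixhauser algorithm applied to baseline ICD-9-CM codes.

```python
def categorize_comorbidity(n_conditions):
    """Bucket a count of Elixhauser comorbid conditions into the
    study's three categories. Illustrative only: the real index is
    derived from ICD-9-CM codes over the 6-month baseline period."""
    if n_conditions == 0:
        return "0"
    if n_conditions <= 2:
        return "1-2"
    return ">=3"
```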

Outcomes

All-cause HcRU was identified during the 30-day postdischarge period. Specifically, we identified inpatient readmissions, ED visits, physician office visits, nonphysician outpatient encounters (eg, pathology, radiology, outpatient surgical services), and prescription drug fills. Binary variables (yes or no) were created for inpatient readmissions and ED visits, while the remaining HcRU categories (ie, physician office visits, nonphysician outpatient encounters, and prescription drug fills) were analyzed as count variables. In the sensitivity analyses, we provide results for HcRU outcomes among the subgroup of individuals who had at least 90 days of continuous medical and prescription drug benefits following the hospital discharge.
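As a rough sketch of how such outcome variables might be derived from claims, the following assumes hypothetical claim records with `service_date` and `category` fields; the real PharMetrics schema differs, and this is not the authors' code.

```python
from datetime import date, timedelta

def build_outcomes(discharge, claims, window_days=30):
    """Summarize all-cause HcRU during the postdischarge window.
    Each claim is a dict with a 'service_date' (datetime.date) and a
    'category' ('inpatient', 'ed', 'office', 'outpatient_other', or
    'rx') -- hypothetical field names for illustration."""
    end = discharge + timedelta(days=window_days)
    counts = {}
    for claim in claims:
        if discharge < claim["service_date"] <= end:
            counts[claim["category"]] = counts.get(claim["category"], 0) + 1
    return {
        # binary (yes/no) outcomes
        "readmission": counts.get("inpatient", 0) > 0,
        "ed_visit": counts.get("ed", 0) > 0,
        # count outcomes
        "office_visits": counts.get("office", 0),
        "outpatient_other": counts.get("outpatient_other", 0),
        "rx_fills": counts.get("rx", 0),
    }
```

Passing `window_days=90` would give the sensitivity-analysis window described above.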

Statistical Analysis

Descriptive Analysis

Measures of interest were reported using summary statistics appropriate to the nature of each variable. Continuous variables were compared across groups using t tests, and categorical variables were compared using chi-square tests.
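As an illustration of the chi-square comparison for a 2x2 table (eg, discharge group by 30-day readmission status), here is a plain-Python stand-in for the SAS procedures actually used:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (df = 1) for the
    2x2 table [[a, b], [c, d]] of counts."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )
    p = math.erfc(math.sqrt(stat / 2))  # chi-square(1) survival function
    return stat, p
```

For example, `chi2_2x2(10, 20, 20, 10)` gives a statistic of about 6.67 with P ≈ .01.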

Propensity Score Matching

Cases and controls were matched using a 1:1 greedy matching algorithm based on propensity scores.15 We developed propensity scores based on confounders that we hypothesized would be associated with a DAMA and with postdischarge HcRU. The propensity score model included the following variables: age, sex, region of residence, Elixhauser comorbidity index score, year of admission, timing of admission, LOS, and presence of any surgical procedure during the inpatient admission. The best match between cases and controls was determined by the absolute difference in their propensity scores, with a maximal caliper width of 0.2 standard deviations of the logit of the propensity score.16 A standardized difference of less than 0.1 was used to assess balance in baseline patient and hospital characteristics between cases and controls, consistent with prior literature.17,18 Proportions and balance, as measured by standardized differences between baseline covariates across cases and controls in the matched sample, are displayed in tabular format (Appendix Table 1).
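A minimal sketch of the matching procedure described above, assuming propensity scores have already been estimated. The IDs and data layout are hypothetical, and this is a simplified stand-in rather than the SAS macro the authors used.

```python
import math
import statistics

def _logit(p):
    return math.log(p / (1 - p))

def greedy_match(case_ps, control_ps, caliper_scale=0.2):
    """1:1 greedy nearest-neighbor matching on the logit of the
    propensity score, with a caliper of 0.2 standard deviations of
    the logit (as in the study). case_ps and control_ps map
    (hypothetical) person IDs to propensity scores; returns matched
    (case_id, control_id) pairs. Unmatchable cases are dropped."""
    all_logits = [_logit(p) for p in list(case_ps.values()) + list(control_ps.values())]
    caliper = caliper_scale * statistics.stdev(all_logits)
    available = dict(control_ps)
    pairs = []
    for case_id in sorted(case_ps):          # greedy: matched once, never revisited
        lc = _logit(case_ps[case_id])
        best_id, best_gap = None, caliper
        for ctrl_id, p in available.items():
            gap = abs(_logit(p) - lc)
            if gap <= best_gap:
                best_id, best_gap = ctrl_id, gap
        if best_id is not None:
            pairs.append((case_id, best_id))
            del available[best_id]           # matching without replacement
    return pairs

def std_diff(x1, x2):
    """Standardized difference for a continuous covariate; values
    below 0.1 were taken to indicate balance."""
    m1, m2 = statistics.mean(x1), statistics.mean(x2)
    v1, v2 = statistics.variance(x1), statistics.variance(x2)
    return abs(m1 - m2) / math.sqrt((v1 + v2) / 2)
```

After matching, `std_diff` would be computed for each baseline covariate across the matched cases and controls and checked against the 0.1 threshold.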

Healthcare Resource Utilization

We estimated the adjusted odds ratio (AOR) using a logistic regression model. The AOR quantified the association between a DAMA and the prevalence of all-cause inpatient readmissions and ED visits during the 30-day postdischarge period. For count outcomes, we estimated incidence rate ratios (IRRs). Given the large number of individuals with no physician office visits, nonphysician outpatient encounters, or prescription drug fills, we estimated IRRs using a finite mixture negative binomial hurdle model.19 We treated the data as a mixture of a constant distribution (which always generates zero counts) and a zero-truncated distribution (which always generates nonzero counts). The finite mixture count models include two components: the mixing probabilities and the count distribution. The mixing probabilities quantify the probability that an observation for the HcRU category is drawn from either the constant distribution (with mass at zero) or the count distribution. Conditional on positive values, a zero-truncated generalized linear model (GLM) governs the count variable. Compared with other GLM specifications (eg, Poisson, negative binomial, zero-inflated), the negative binomial hurdle model provided the best fit across several information criterion statistics (Appendix Figures 1-3 and Appendix Tables 2-4).
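To make the two-part structure concrete: the expected count under a hurdle model is (1 − p0) × μ / (1 − P_NB(0)), where p0 is the mixing probability of a zero from the hurdle part and the positive counts follow a zero-truncated negative binomial with mean μ and dispersion k. A small illustration with made-up parameter values:

```python
def hurdle_expected_count(p_zero, mu, k):
    """Expected count under a negative binomial hurdle model.
    p_zero: probability of a zero from the hurdle (logistic) part;
    mu, k: mean and dispersion of the untruncated NB governing the
    positive counts. All parameter values here are made up."""
    p_nb_zero = (k / (k + mu)) ** k         # untruncated NB mass at zero
    mu_truncated = mu / (1 - p_nb_zero)     # mean of the zero-truncated NB
    return (1 - p_zero) * mu_truncated
```

A convenient sanity check: setting p_zero equal to the NB's own zero mass recovers the plain negative binomial mean. An IRR compares such expected counts between the DAMA and routine discharge groups.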

The GLM results provided IRRs for the counts of HcRU. Ratios were interpreted as evidence of increased HcRU (IRR > 1.0) or decreased HcRU (IRR < 1.0) among individuals with a DAMA compared with those discharged routinely. For all HcRU analyses, we reported results for the matched sample. All analyses were conducted using SAS version 9.4 (SAS Institute), and statistical significance was determined at α = .05. The study received University of Maryland, Baltimore, Institutional Review Board approval (HP-00081497).

RESULTS

The unmatched sample included 457,530 individuals, of whom 0.5% had a DAMA. A flow diagram illustrating cohort inclusion and exclusion criteria is presented in Appendix Figure 4. Demographic, clinical, and inpatient admission characteristics of the unmatched sample and of subgroups defined by discharge status are displayed in Table 1. In the unmatched sample, the median age at admission was higher for individuals with a DAMA than for those discharged routinely (43 vs 42 years), and the proportion of males was higher among those with a DAMA (58.4% vs 33.1%). There were statistically significant differences in geographic region of residence and comorbidity burden between the two groups. The median LOS was shorter (1 day vs 2 days), the proportion of weekend admissions was higher (22.2% vs 16.3%), and the proportion of inpatient surgical procedures was lower (12.9% vs 59.2%) among those with a DAMA than among those discharged routinely. The propensity score-matched sample included 2,245 cases and 2,245 controls (Appendix Table 1). Standardized differences for all baseline factors were less than 0.1, indicating that cases and controls were well balanced on the included baseline factors.

Demographic, Clinical, and Hospitalization Characteristics of the Unmatched Sample

Summary Statistics: Proportions and Counts

Across the DAMA and routine discharge groups, the proportion of individuals with a 30-day inpatient readmission was similar (19.5% vs 18.7%; P = .47), whereas the proportion with an ED visit was higher in the DAMA group (18.6% vs 9.1%; P < .01). There were no differences in the median number of inpatient readmissions (median, 0) or ED visits (median, 0) between the groups. Individuals with a DAMA and those discharged routinely displayed similar median counts of 30-day physician office visits (median, 1) and nonphysician outpatient encounters (median, 1) (Table 2). Individuals with a DAMA displayed a lower median number of prescription drug fills (median, 2 vs 3) than those with a routine discharge (Table 2).

Summary Statistics for HcRU During the 30-day Postdischarge Period

Main Analysis: Thirty-Day Healthcare Resource Utilization

The associations between a DAMA and 30-day inpatient readmissions and ED visits based on the matched sample are presented in Table 3. Individuals with a DAMA had increased odds for an ED visit (AOR, 2.28; 95% CI, 1.90-2.72) but no significant difference in the odds of a 30-day inpatient readmission (AOR, 1.06; 95% CI, 0.91-1.23) compared with those discharged routinely.

Adjusted Odds Ratios for Binary Outcomes During 30-Day Postdischarge Period

The association between a DAMA and count HcRU outcomes is presented in Table 4. Compared with those discharged routinely, individuals with a DAMA displayed no significant difference in rates for physician office visits (IRR, 1.01; 95% CI, 0.91-1.11), nonphysician outpatient encounters (IRR, 0.89; 95% CI, 0.78-1.00), and prescription drug fills (IRR, 1.03; 95% CI, 0.97-1.09) during the 30-day postdischarge period.

Adjusted IRR for Count Outcomes During 30-Day Postdischarge Period

Sensitivity Analysis: Ninety-Day Healthcare Resource Utilization

Relative to those discharged routinely, individuals with a DAMA had statistically significantly increased odds of 90-day inpatient readmission (AOR, 1.18; 95% CI, 1.02-1.36), increased odds of ED visits (AOR, 2.16; 95% CI, 1.85-2.51), and increased rates of prescription drug fills (IRR, 1.32; 95% CI, 1.29-1.35). No statistically significant differences were observed in the rates of physician office visits or nonphysician outpatient encounters between the groups.

DISCUSSION

In this commercially insured sample of working-age individuals, we identified an association between a DAMA and the likelihood and intensity of postdischarge HcRU. The direction of the association varied across categories of HcRU and the duration of follow-up. A DAMA was associated with increased odds of 30-day ED visits but not 30-day readmissions compared with routine discharges. No significant differences were observed in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups. To our knowledge, this is the first study on DAMAs to examine postdischarge HcRU outside the inpatient setting.

The 0.5% prevalence of DAMAs in our study was lower than the approximately 1% to 2% typically reported in the literature. Prior studies have typically reported results based on mixed-payer populations.3-10 These mixed-payer populations include publicly insured (Medicare or Medicaid) or uninsured stays, which account for a disproportionate share of all DAMAs. In contrast, commercially insured stays account for the lowest proportion of all DAMAs.10 Similar to prior literature,5 the DAMA group in our study was younger, had a higher proportion of males, had a higher comorbidity burden, and had a shorter LOS than the routinely discharged group.

We observed a greater likelihood of ED utilization after a DAMA. Similar findings have been reported, which may indicate that patients with a DAMA receive inadequate treatment at the time of discharge and may require further acute treatment. For example, a prior study reported that, after a DAMA, individuals with asthma were four times more likely to have an ED visit within 14 days compared with those discharged routinely.4

Contrary to prior findings,3-9 we found no significant difference in the odds of a 30-day inpatient readmission between the DAMA and routine discharge groups, which may be attributable to differences in the populations studied. Those previous studies used mixed-payer populations and did not differentiate results by payer type. The mixed-payer populations in these studies were older (mean ages, 55 years and above) and had a greater comorbidity burden than our commercially insured population. Furthermore, some of these studies were limited to single sites,8 single-state hospital systems,3,4,9 or specific medical populations.3,4,6-9 Our national sample of commercially insured adults is considerably younger, with a mean age of 43 years. Thirty days may also be too brief a window to observe enough inpatient readmissions for comparative analyses; this is suggested by our finding of an association between a DAMA and 90-day inpatient readmission. Additionally, the nonsignificant findings for 30-day inpatient readmissions may reflect the small sample size of the DAMA group in our study, which may have limited robust statistical inference. Future studies in a larger population of commercially insured individuals with a DAMA are required to confirm these findings.

Nonsignificant differences in the rates of 30-day physician office visits, nonphysician outpatient encounters, and prescription drug fills across both groups may explain the null association with 30-day inpatient readmissions. Prior literature on specific medical populations and on individuals with general hospital admissions reports that early outpatient follow-up can help prevent 30-day readmissions.20-25 In our sample, we observed similar rates of outpatient follow-up across the DAMA and routinely discharged groups. Prior studies based on single hospital sites have reported that, at the time of discharge, a lower proportion of individuals with a DAMA received medication prescriptions and outpatient follow-up plans compared with those discharged routinely.11,12 In contrast, we evaluated prescription drug fills and outpatient visits during the postdischarge period, which may explain the difference in findings.

The present study has several strengths. To the best of our knowledge, our study represents the first and largest retrospective analysis of DAMAs in a national sample of commercially insured adults. In addition to a large generalizable sample, we examine HcRU after a DAMA across major points of service over a longitudinal postdischarge period. Our results provide a comprehensive understanding of utilization outcomes in this population including those outside the inpatient setting, which has been the focus of prior literature. These findings can help guide the implementation of appropriate patient- and system-level interventions to optimize DAMA prevention and mitigate the associated utilization burden on the healthcare system in the postdischarge period.26,27

Our findings should be interpreted with certain limitations in mind. First, this study used data based on a commercially insured sample of patients and may not be generalizable to publicly insured or uninsured samples. Second, like prior DAMA studies that used the Nationwide Readmissions Database,5-7 our study was unable to account for individual-level factors such as race, marital status, family social support, income, health literacy, and activation in self-care. Further, given the limitations of our data, we were unable to control for hospital characteristics such as bed size, urban-rural designation, teaching status, and control (eg, private or government ownership). Despite the use of propensity score methods to balance both comparison groups on observable sources of confounding, we cannot rule out the possibility of residual confounding. Lastly, due to a lack of data on postdischarge mortality, we could not control for the competing risk of death in our analysis. However, in a population with an average age of 43 years, we did not expect high or differential 30- or 90-day postdischarge mortality rates across both groups.

Our findings suggest several important directions for future research. First, it will be useful to examine these associations in publicly insured and uninsured samples, in which a DAMA is more prevalent and in which the associations with HcRU may be more pronounced than in the commercially insured population. Second, future research should identify subgroups of DAMA patients with an increased propensity for postdischarge HcRU, which can inform the design of individualized outpatient follow-up plans that address patient-specific medical and social needs. Finally, our findings highlight the need for education, practice guidelines, and suitable interventions to help providers prevent and manage a DAMA.

CONCLUSION

Using data from a commercially insured population, we identified associations between a DAMA and postdischarge HcRU. The associations differed by category of HcRU. We identified a positive association with the likelihood of ED utilization but no association with the likelihood of 30-day inpatient readmission or general outpatient utilization. Our results indicate that the examination of inpatient readmissions after a DAMA should not be considered in isolation. The identification of the full range of outpatient and inpatient HcRU after a DAMA in a broad population of patients can improve our understanding of outcomes following a DAMA and support appropriate system-level interventions designed to reduce their prevalence.

Acknowledgments

The statements, findings, conclusions, views, and opinions contained and expressed in this manuscript are based in part on data obtained under license from IQVIA. Source: IQVIA PharMetrics® Plus January 2006 – December 2015, IQVIA. All Rights Reserved. The statements, findings, conclusions, views, and opinions contained and expressed herein are not necessarily those of IQVIA or any of its affiliated or subsidiary entities.

Disclosures

Dr Onukwugha reports grants from Bayer Healthcare Pharmaceuticals, grants from Pfizer, Inc, and personal fees from Novo Nordisk outside the submitted work. The other authors have nothing to disclose. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the US Department of Veterans Affairs, the U.S. Government, or the VA National Center for Ethics in Health Care.

Funding

The authors acknowledge the support of the University of Maryland, Baltimore Institute for Clinical & Translational Research (ICTR) through the ICTR Voucher Program.

References

1. Alfandre DJ. “I’m going home”: discharges against medical advice. Mayo Clin Proc. 2009;84(3):255-260. https://doi.org/10.4065/84.3.255
2. Garland A, Ramsey CD, Fransoo R, et al. Rates of readmission and death associated with leaving hospital against medical advice: a population-based study. CMAJ. 2013;185(14):1207-1214. https://doi.org/10.1503/cmaj.130029
3. Fiscella K, Meldrum S, Barnett S. Hospital discharge against advice after myocardial infarction: deaths and readmissions. Am J Med. 2007;120(12):1047-1053. https://doi.org/10.1016/j.amjmed.2007.08.024
4. Baptist AP, Warrier I, Arora R, Ager J, Massanari RM. Hospitalized patients with asthma who leave against medical advice: characteristics, reasons, and outcomes. J Allergy Clin Immunol. 2007;119(4):924-929. https://doi.org/10.1016/j.jaci.2006.11.695
5. Kumar N. Burden of 30-day readmissions associated with discharge against medical advice among inpatients in the United States. Am J Med. 2019;132(6):708-717.e4. https://doi.org/10.1016/j.amjmed.2019.01.023
6. Kwok CS, Walsh MN, Volgman A, et al. Discharge against medical advice after hospitalisation for acute myocardial infarction. Heart. 2019;105(4):315-321. https://doi.org/10.1136/heartjnl-2018-313671
7. Patel B, Prousi G, Shah M, et al. Thirty-day readmission rate in acute heart failure patients discharged against medical advice in a matched cohort study. Mayo Clin Proc. 2018;93(10):1397-1403. https://doi.org/10.1016/j.mayocp.2018.04.023
8. Southern WN, Nahvi S, Arnsten JH. Increased risk of mortality and readmission among patients discharged against medical advice. Am J Med. 2012;125(6):594-602. https://doi.org/10.1016/j.amjmed.2011.12.017
9. Onukwugha E, Mullins D, Loh FE, Saunders E, Shaya FT, Weir MR. Readmissions after unauthorized discharges in the cardiovascular setting. Med Care. 2011;49(2):215-224. https://doi.org/10.1097/mlr.0b013e31820192a5
10. Stranges E, Wier L, Merrill CT, Steiner C. Hospitalizations in which Patients Leave the Hospital against Medical Advice (AMA), 2007. HCUP Statistical Brief #78. Healthcare Cost and Utilization Project, Agency for Healthcare Research and Quality; August 2009. Accessed April 7, 2020. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb78.pdf
11. Edwards J, Markert R, Bricker D. Discharge against medical advice: how often do we intervene? J Hosp Med. 2013;8(10):574-577. https://doi.org/10.1002/jhm.2087
12. Stearns CR, Bakamjian A, Sattar S, Weintraub MR. Discharges against medical advice at a county hospital: provider perceptions and practice. J Hosp Med. 2017;12(1):11-17. https://doi.org/10.1002/jhm.2672
13. Garland A, Fransoo R, Olafson K, et al. The Epidemiology and Outcomes of Critical Illness in Manitoba. Manitoba Centre for Health Policy; April 2012. Accessed April 7, 2020. http://mchp-appserv.cpe.umanitoba.ca/reference/MCHP_ICU_Report_WEB_(20120403).pdf
14. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. https://doi.org/10.1097/00005650-199801000-00004
15. Austin PC. A comparison of 12 algorithms for matching on the propensity score. Stat Med. 2014;33(6):1057-1069. https://doi.org/10.1002/sim.6004
16. Austin PC. Optimal caliper widths for propensity‐score matching when estimating differences in means and differences in proportions in observational studies. Pharm Stat. 2011;10(2):150-161. https://doi.org/10.1002/pst.433
17. Austin PC, Mamdani MM. A comparison of propensity score methods: a case‐study estimating the effectiveness of post‐AMI statin use. Stat Med. 2006;25(12):2084-2106. https://doi.org/10.1002/sim.2328
18. Normand ST, Landrum MB, Guadagnoli E, et al. Validating recommendations for coronary angiography following acute myocardial infarction in the elderly: a matched analysis using propensity scores. J Clin Epidemiol. 2001;54(4):387-398. https://doi.org/10.1016/s0895-4356(00)00321-8
19. Mullahy J. Specification and testing of some modified count data models. J Econometrics. 1986;33(3):341-365. https://doi.org/10.1016/0304-4076(86)90002-3
20. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients—development of a discharge checklist for hospitalists. J Hosp Med. 2006;1(6):354-360. https://doi.org/10.1002/jhm.129
21. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. JAMA. 2010;303(17):1716-1722. https://doi.org/10.1001/jama.2010.533
22. Leschke J, Panepinto JA, Nimmer M, Hoffmann RG, Yan K, Brousseau DC. Outpatient follow‐up and rehospitalizations for sickle cell disease patients. Pediatr Blood Cancer. 2012;58(3):406-409. https://doi.org/10.1002/pbc.23140
23. Misky GJ, Wald HL, Coleman EA. Post‐hospitalization transitions: Examining the effects of timing of primary care provider follow‐up. J Hosp Med. 2010;5(7):392-397. https://doi.org/10.1002/jhm.666
24. Muus K, Knudson A, Klug MG, Gokun J, Sarrazin M, Kaboli P. Effect of post-discharge follow-up care on re-admissions among US veterans with congestive heart failure: a rural-urban comparison. Rural Remote Health. 2010;10(2):1447. https://doi.org/10.22605/RRH1447
25. Ryan J, Kang S, Dolacky S, Ingrassia J, Ganeshan R. Change in readmissions and follow-up visits as part of a heart failure readmission quality improvement initiative. Am J Med. 2013;126(11):989-994.e1. https://doi.org/10.1016/j.amjmed.2013.06.027
26. Alfandre D. Improving quality in against medical advice discharges—more empirical evidence, enhanced professional education, and directed systems changes. J Hosp Med. 2017;12(1):59-60. https://doi.org/10.1002/jhm.2678
27. Nagarajan M, Offurum AI, Gulati M, Onukwugha E. Discharges Against Medical Advice: Prevalence, Predictors, and Populations. In: Alfandre D, ed. Against‐Medical‐Advice Discharges from the Hospital. Springer; 2018:11-29.


Issue
Journal of Hospital Medicine 15(12)
Page Number
716-722. Published Online First November 18, 2020

© 2020 Society of Hospital Medicine

Correspondence: Eberechukwu Onukwugha, PhD, MS; Email: eonukwug@rx.umaryland.edu; Telephone: 410-706-8981.

Models for Implementing Buprenorphine Treatment in the VHA

Issue
Federal Practitioner - 26(5)
Page Number
48-57

Template Design and Analysis: Integrating Informatics Solutions to Improve Clinical Documentation

Standardized template design is a useful tool for improving clinical documentation and the reliable reporting of health care outcomes when templates are constructed with clear objectives and in collaboration with key stakeholders. A standardized template should not only capture accurate diagnostic information, but also inform quality improvement (QI) measures and best practices.

Kang and colleagues showed a correlation between organizational satisfaction and improved quality outcomes.1 A new initiative should have a well-defined purpose, reinforced by collaborative workgroups and engaged employees who understand how electronic health record (EHR) modifications affect their clinical care roles.

Several studies have shown that templates can serve multiple goals, such as accurate documentation and improved care. Valluru and colleagues showed a significant increase in vaccination rates for patients with inflammatory bowel disease after implementing a standardized template.2 Using a standardized template, Thaker and colleagues showed improved documentation of obesity and increased nutritional and physical activity counseling.3 Furthermore, Grogan and colleagues showed that templates are useful for house staff education on International Classification of Diseases (ICD) terminology and demonstrated improved documentation in the postintervention group.4,5

This article discusses how the US Department of Veterans Affairs (VA) North Florida/South Georgia Veterans Health System (NF/SGVHS) integrated informatics solutions into template design within the Veterans Health Administration (VHA) EHR system, an effort associated with an increase in its case severity index (CSI) through improved clinical documentation capture.


Methods

Under NF/SGVHS policy defining activities that constitute research, institutional review board approval was not required because this work met the criteria for an operational improvement activity exempt from ethics review.

NF/SGVHS includes 2 hospitals: Malcom Randall VA Medical Center (MRVAMC) in Gainesville, Florida, and Lake City VA Medical Center (LCVAMC) in Lake City, Florida. MRVAMC is a large academic 1a VA facility with rotating residents and fellows and multiple specialty care services. LCVAMC is a smaller, nonteaching facility.

Template Design Impact

CSI is a risk-adjusted formula developed by the Inpatient Evaluation Center within the VHA. CSI is incorporated into the VHA quality metrics reporting system, Strategic Analytics for Improvement and Learning (SAIL), which uses CSI to risk-adjust metrics such as length of stay and mortality before releasing reports. CSI is calculated separately for the acute level of care (LOC) and for the intensive care unit (ICU). In fiscal year (FY) 2017, preimplementation acute LOC CSI values at NF/SGVHS were 0.76 for MRVAMC and 0.81 for LCVAMC, both significantly below the national VHA average of 0.96 (Table).

A below-average CSI conveys a less complicated case mix than most other VA facilities. Although smaller VA facilities may have a less complicated case mix, it is unusual for a large, tertiary care 1a VA facility to have a low CSI. A low CSI is usually due to inadequate documentation, which affects not only risk-adjusted quality metric outcomes but also potential reimbursement.6

An interdisciplinary team composed of attendings, residents, and a clinical documentation improvement specialist identified the below-average acute LOC CSI at MRVAMC and LCVAMC relative to the national VHA average. Further chart review showed inconsistent standardized documentation despite prior health care provider education on ICD terminology and on Elixhauser comorbidities, the specific groups of common comorbidities analyzed in administrative data reviews for risk-adjustment purposes.5,7

Chart review also showed a lack of clarity regarding the primary reason(s) for admission and chronic comorbidities within NF/SGVHS. Using Pareto chart analysis, the template team designed a standardized history and physical (H&P) medicine template based on common NF/SGVHS medicine admissions (Figure 1). A Pareto chart is a valuable QI tool that identifies the largest contributors to a problem within a large set of data points, which helps focus QI efforts.8
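
The analysis behind a Pareto chart reduces to sorting counts and accumulating percentages. The diagnosis names and counts below are hypothetical, not the NF/SGVHS admission data:

```python
# Hypothetical admission-diagnosis counts (illustrative only).
admissions = {
    "Heart failure": 120,
    "COPD exacerbation": 85,
    "Pneumonia": 60,
    "Sepsis": 45,
    "Cellulitis": 25,
    "Other": 15,
}

def pareto(counts):
    """Return (diagnosis, count, cumulative %) rows, largest contributors first."""
    total = sum(counts.values())
    rows, running = [], 0
    for dx, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += n
        rows.append((dx, n, round(100 * running / total, 1)))
    return rows

for dx, n, cum in pareto(admissions):
    print(f"{dx:20s} {n:4d} {cum:6.1f}%")
```

Reading the cumulative column off such a table shows which handful of diagnoses account for most admissions, which is what guided the template's checkbox content.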



The template includes the usual H&P elements (eg, chief complaint, history of present illness), which are not shown, and highlights the assessment/plan section containing the primary reason(s) for admission and chronic comorbidities (Figure 1). The complete assessment and plan section of the template can be found in the Appendix.

To simplify the template interface, only single clicks were required to expand the diagnostic and chronic comorbidity checkboxes. Subcategories then appeared for selecting diagnoses and chronic comorbidities, along with free-text fields for additional documentation.

In addition, data objects were created within the template to retrieve information from the VHA EHR and insert specific data points of interest into the template; for example, body mass index to assess degree of obesity and estimated glomerular filtration rate to determine the stage of chronic kidney disease. This allowed users to reference data easily in one template instead of searching multiple places in the EHR.9
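
As an illustration of the kind of derived field such a data object can supply, the following sketch maps an eGFR value to its standard KDIGO GFR category; the function name and structure are illustrative and not the VHA implementation:

```python
def gfr_category(egfr):
    """Map eGFR (mL/min/1.73 m^2) to the standard KDIGO GFR category (G1-G5)."""
    if egfr >= 90:
        return "G1"   # normal or high
    if egfr >= 60:
        return "G2"   # mildly decreased
    if egfr >= 45:
        return "G3a"  # mildly to moderately decreased
    if egfr >= 30:
        return "G3b"  # moderately to severely decreased
    if egfr >= 15:
        return "G4"   # severely decreased
    return "G5"       # kidney failure

print(gfr_category(52))  # → G3a
```

Inlining a derived value like this, next to the raw number it came from, is what lets the template document "CKD stage G3a" consistently without the clinician hunting through the chart.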

Results

The standardized H&P medicine template was implemented at MRVAMC and LCVAMC in June 2018 (the final month of the third quarter of FY 2018). As clinical providers throughout NF/SGVHS used the standardized template, postimplementation acute LOC CSI improved significantly. Although the national VHA average decreased slightly from 0.96 in the first quarter of FY 2017 to 0.89 in the first quarter of FY 2019, MRVAMC acute LOC CSI improved from 0.76 to 0.97 and LCVAMC acute LOC CSI improved from 0.81 to 1.07 during the same period.

Compliance also was monitored at MRVAMC and LCVAMC for about 1 year after implementation of the standardized H&P medicine template. Compliance was determined by how often the standardized H&P medicine template, rather than another H&P note (such as a personalized template), was used for inpatient medicine admissions to the acute care wards.

Compliance analysis began with acquisition of completed H&P medicine notes from June 18, 2018, to June 30, 2019, within the VHA Veterans Information Systems and Technology Architecture (VistA) clinical and business information system using the search strings "H&P admission history and physical" and "history of present illness."10

This review identified 10,845 completed medicine H&P notes. Of these, 918 were excluded because the search yielded a location not corresponding to MRVAMC or LCVAMC. Of the 9,927 remaining notes, 8,025 were completed at MRVAMC and 1,902 at LCVAMC (Figure 2).



At MRVAMC, compliance was reviewed monthly for the 8,025 completed H&P medicine notes from June 18, 2018, to June 30, 2019. The standardized H&P medicine template was used for 43.2% of completed H&P medicine notes in June 2018. By June 2019, MRVAMC clinical providers demonstrated significant improvement, using the standardized H&P medicine template 89.9% of the time (Figure 3). Average compliance over the full period was 88.4%, roughly double the rate at the initial introduction of the standardized H&P medicine template.



At LCVAMC, compliance was reviewed monthly for the 1,902 completed H&P medicine notes from June 18, 2018, to June 30, 2019. The standardized template was used for 48.2% of completed H&P medicine notes in June 2018. By June 2019, LCVAMC clinical providers demonstrated significant improvement, with use of the standardized H&P medicine template increasing to 96.9%. Average compliance over the full period was 93.8%, almost double the baseline rate.
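
The compliance measure described above is a simple ratio of template-based notes to all completed H&P medicine notes. The note counts in this sketch are illustrative, since the article reports monthly percentages rather than raw monthly counts:

```python
def compliance_rate(template_notes, total_notes):
    """Percentage of completed H&P medicine notes that used the standardized template."""
    if total_notes == 0:
        raise ValueError("no completed notes in the period")
    return round(100 * template_notes / total_notes, 1)

# Illustrative counts for a single month (hypothetical, not NF/SGVHS data):
print(compliance_rate(432, 1000))  # → 43.2
```

Computing this monthly and reviewing the trend with leadership, as NF/SGVHS did, is what turns a one-time template rollout into a monitored process measure.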

Discussion

Template design with clear objectives, strategic collaboration, and integrated informatics solutions has the potential to increase documentation accuracy. As shown, the NF/SGVHS template design was associated with significant improvement in acute LOC CSI at both MRVAMC and LCVAMC owing to more accurate documentation with the standardized H&P medicine template.

Numerous factors contributed to the success of this template design. First, a clear vision for the template's application was communicated to key stakeholders. In addition, the design team focused on specific goals rather than a one-size-fits-all approach, which was crucial for sustainable execution. Although interdisciplinary teamwork can produce innovative practices, large multidisciplinary teams may have difficulty establishing a shared vision, creating barriers to achieving project goals.

Balancing standardization and customization was essential for user buy-in. As noted by Gardner and Pearce, inviting clinical providers to participate in template design and allowing for customization has the potential to increase acceptance and use of templates.11 Although the original design for the standardized H&P medicine template started with the medicine service at NF/SGVHS, the design framework is applicable to numerous services where various clinical care elements can be customized.

Explaining the informatics tools built into the template allowed clinicians to see opportunities to improve clinical documentation and its impact on the reporting of health care outcomes. When improvement work integrates clinical care delivery with administrative expectations, it is essential that health care systems understand and strategically execute project initiatives at this critical juncture.

Finally, incorporating a sustainability plan when implementing process improvement strategies is vital. In addition to collaboration with clinical providers during design and implementation of the standardized template, leadership buy-in was key. Compliance with standardized H&P medicine template use was monitored monthly and reviewed by the NF/SGVHS Chief of Staff.

As noted, postimplementation acute LOC CSI at LCVAMC was higher than at MRVAMC despite LCVAMC being the smaller facility. This may reflect MRVAMC's designation as a teaching institution: medicine is the only inpatient service at LCVAMC and is staffed by hospitalists with limited specialists available for consultation, whereas MRVAMC is a tertiary care teaching facility with numerous inpatient services and subspecialties. Whereas LCVAMC has more staffing continuity, house staff rotating through MRVAMC require continued training and education on new templates and process changes.

Limitations

Although the standardized template design was successful at NF/SGVHS, limitations should be noted. Our clinical documentation improvement (CDI) program was expanded at about the same time the new templates were released, and this expansion likely had a synergistic effect with the template design on acute LOC CSI.

CSI is a complex, risk-adjusted model that incorporates numerous factors, including but not limited to diagnoses and comorbid conditions. Other factors include age, marital status, procedures, source of admission, specific laboratory values, medical or surgical diagnosis-related group, ICU stays, and immunosuppressive status. CSI also includes operative and nonoperative components that are averaged into an overall CSI. Because the majority of the CSI at NF/SGVHS comprises nonoperative components, we do not believe this substantially affected the reported CSI improvements.

In addition, template entry into the VHA EHR requires a location selection (such as a clinic name or, following an inpatient admission, a ward name). Of the 10,845 completed H&P medicine notes identified in VistA, 918 were excluded from analysis because the search yielded a location not corresponding to MRVAMC or LCVAMC, most likely due to user error in selecting a location during standardized H&P medicine template entry.

Conclusions

After the NF/SGVHS implementation of a uniquely designed template embedded with informatics solutions within the VHA EHR, the CSI increased due to more accurate documentation.

Next steps include determining the impact of the NF/SGVHS template design on potential reimbursement and expanding template design into the outpatient setting where there are additional opportunities to improve clinical documentation and reliable reporting of health care outcomes.

Acknowledgments

The authors thank the following individuals for their experience and contribution: Beverley White is the Clinical Documentation Improvement Coordinator at North Florida/South Georgia Veterans Health System and provided expertise on documentation requirements. Russell Jacobitz and Susan Rozelle provided technical expertise on electronic health record system enhancements and implemented the template design. Jess Delaune, MD, and Robert Carroll, MD, provided additional physician input during template design. We also acknowledge the Inpatient Evaluation Center (IPEC) within the Veterans Health Administration (VHA). IPEC developed the case severity index, a risk-adjusted formula incorporated into the VHA quality metric reporting system, Strategic Analytics for Improvement and Learning (SAIL).

References

1. Kang R, Kunkel S, Columbo J, et al. Association of Hospital Employee satisfaction with patient safety and satisfaction within Veterans Affairs Medical Centers. Am J Med. 2019;132(4):530-534.e1. doi: 10.1016/j.amjmed.2018.11.031

2. Valluru N, Kang L, Gaidos JK. Health maintenance documentation improves for veterans with IBD using a template in the Computerized Patient Record System. Dig Dis Sci. 2018;63(7):1782-1786. doi:10.1007/s10620-018-5093-5

3. Thaker VV, Lee F, Bottino CJ, et al. Impact of an electronic template on documentation of obesity in a primary care clinic. Clin Pediatr. 2016;55(12):1152-1159. doi:10.1177/0009922815621331

4. Grogan EL, Speroff T, Deppen S, et al. Improving documentation of patient acuity level using a progress note template. J Am Coll Surg. 2004;199(3):468-475. doi:10.1016/j.jamcollsurg.2004.05.254

5. Centers for Disease Control and Prevention. Classification of diseases, functioning, and disability. https://www.cdc.gov/nchs/icd/index.htm. Updated June 30, 2020. Accessed October 12, 2020.

6. Marill KA, Gauharou ES, Nelson BK, et al. Prospective, randomized trial of template-assisted versus undirected written recording of physician records in the emergency department. Ann Emerg Med. 1999;33(5):500-509. doi:10.1016/S0196-0644(99)70336-7

7. Elixhauser A, Steiner C, Harris DR, et al. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. doi:10.1097/00005650-199801000-00004

8. Hart KA, Steinfeldt BA, Braun RD. Formulation and applications of a probabilistic Pareto chart. AIAA. 2015;0804. doi:10.2514/6.2015-0804

9. IBM. IBM Knowledge Center: overview of data objects. https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.cbclx01/data_objects.htm. Accessed October 12, 2020.

10. US Department of Veterans Affairs. History of IT at VA. https://www.oit.va.gov/about/history.cfm. Accessed October 18, 2020.

11. Gardner CL, Pearce PF. Customization of electronic medical record templates to improve end-user satisfaction. Comput Inform Nurs. 2013;31(3):115-121. doi:10.1097/NXN.0b013e3182771814

Author and Disclosure Information

Justin Iannello is National Lead Physician Utilization Management Advisor for the Veterans Health Administration and Associate Chief of Staff for Clinical Informatics at the Southeast Louisiana Veterans Health Care System in New Orleans. Nida Waheed is Chief Resident in Quality and Patient Safety for the Department of Internal Medicine, and Patrick Neilan is Chief Resident for the Department of Internal Medicine, both at the University of Florida in Gainesville.

Correspondence: Justin Iannello (JLIannello22@gmail.com)

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 37(11)a
Page Number
527-531

Standardized template design is a useful tool for improving clinical documentation and reliable reporting of health care outcomes when constructed with clear objectives and in collaboration with key stakeholders. A standardized template should not only capture accurate diagnostic information, but also inform quality improvement (QI) measures and best practices.

Kang and colleagues showed that a correlation exists between organizational satisfaction and improved quality outcomes.1 A new initiative should have a well-defined purpose, reinforced by collaborative workgroups and engaged employees who understand how electronic health record (EHR) modifications fit into their clinical care role.

Several studies have shown that templates can serve multiple goals, such as accurate documentation and improved care. Valluru and colleagues showed a significant increase in vaccination rates for patients with inflammatory bowel disease after implementing a standardized template.2 Using a standardized template, Thaker and colleagues showed improved documentation regarding obesity as well as increased nutritional and physical activity counseling.3 Furthermore, Grogan and colleagues showed that templates are useful for house staff education on International Classification of Diseases (ICD) terminology and demonstrated improved documentation in the postintervention group.4,5

This article discusses how the US Department of Veterans Affairs (VA) North Florida/South Georgia Veterans Health System (NF/SGVHS) integrated informatics solutions into template design within the Veterans Health Administration (VHA) EHR system, an effort associated with an increase in its case severity index (CSI) through improved clinical documentation capture.

 

Methods

According to NF/SGVHS policy on activities that constitute research, institutional review board approval was not required, as this work met the criteria for operational improvement activities exempt from ethics review.

NF/SGVHS includes 2 hospitals: Malcom Randall VA Medical Center (MRVAMC) in Gainesville, Florida, and Lake City VA Medical Center (LCVAMC) in Lake City, Florida. MRVAMC is a large, complexity level 1a, academic VA facility with rotating residents and fellows and multiple specialty care services. LCVAMC is a smaller, nonteaching facility.

Template Design Impact

CSI is a risk-adjusted formula developed by the Inpatient Evaluation Center within VHA. CSI is incorporated into the VHA quality metrics reporting system, Strategic Analytics for Improvement and Learning (SAIL), and risk-adjusts metrics such as length of stay and mortality before SAIL reports are released. CSI is calculated separately for acute level of care (LOC) and for the intensive care unit (ICU). In fiscal year (FY) 2017, preimplementation acute LOC CSI at NF/SGVHS was 0.76 for MRVAMC and 0.81 for LCVAMC, both significantly below the national VHA average of 0.96 (Table).

A below-average CSI conveys a less complicated case mix compared with most other VA facilities. Although smaller VA facilities may have a less complicated case mix, it is unusual for large, tertiary care 1a VA facilities to have a low CSI. This low CSI is usually due to inadequate documentation, which affects not only risk-adjusted quality metrics outcomes, but also potential reimbursement.6

An interdisciplinary team composed of attendings, residents, and a clinical documentation improvement specialist identified the below-average acute LOC CSI for MRVAMC and LCVAMC compared with the national VHA average. Further chart reviews showed inconsistencies in standardized documentation despite prior health care provider education on ICD terminology and on specific groups of common comorbidities analyzed in administrative data for risk-adjustment purposes, known as Elixhauser comorbidities.5,7

A chart review showed a lack of clarity regarding primary reason(s) for admission and chronic comorbidities within NF/SGVHS. Using Pareto chart analysis, the template team designed a standardized history and physical (H&P) medicine template based on common NF/SGVHS medicine admissions (Figure 1). A Pareto chart is a valuable QI tool that helps identify the largest contributors to a problem when evaluating a large set of data points, thereby focusing QI efforts.8
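The Pareto ranking described above can be sketched in a few lines: tally admissions by diagnosis, sort in descending order, and track the cumulative share. The diagnosis names and counts below are invented for demonstration; the actual NF/SGVHS admission data are not published in this article.

```python
# Illustrative Pareto analysis of common medicine admissions.
# Diagnosis counts are hypothetical, not actual NF/SGVHS data.
from collections import Counter

admission_counts = Counter({
    "Heart failure": 120,
    "COPD exacerbation": 95,
    "Sepsis": 80,
    "Pneumonia": 60,
    "GI bleed": 25,
    "Cellulitis": 20,
})

total = sum(admission_counts.values())
rows, cumulative = [], 0
for diagnosis, count in admission_counts.most_common():
    cumulative += count
    # Rank diagnoses by frequency and record the cumulative share of admissions.
    rows.append((diagnosis, count, round(100 * cumulative / total, 1)))
```

Under these made-up counts, the top 3 diagnoses account for roughly 74% of admissions, which is where a template team would concentrate its checkbox design.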



The template included the usual H&P elements (eg, chief complaint, history of present illness), which are not shown; Figure 1 highlights the assessment/plan section containing the primary reason(s) for admission and chronic comorbidities. The complete assessment and plan section of the template can be found in the Appendix.

To simplify the template interface, a single click expanded the diagnostic and chronic comorbidity checkboxes. Subcategories then appeared for selecting diagnoses and chronic comorbidities, along with free-text fields for additional documentation.

In addition, data objects were created within the template to retrieve information from the VHA EHR and insert specific data points of interest into the template; for example, body mass index to assess degree of obesity and estimated glomerular filtration rate to determine the stage of chronic kidney disease. This allowed users to reference data in one template instead of searching for it in multiple places in the EHR.9
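The data-object idea can be pictured as placeholders in a note template that are filled from structured EHR values at note-creation time. The sketch below is hypothetical: the record format, field names, and helper functions are illustrative and are not the actual VistA data-object interface (the eGFR cutoffs follow the standard KDIGO GFR categories).

```python
# Hypothetical sketch of template "data objects": structured EHR values
# are pulled into the note text automatically. Not the real VistA API.

def ckd_stage(egfr: float) -> str:
    """Map eGFR (mL/min/1.73 m2) to a KDIGO GFR category (G1-G5)."""
    if egfr >= 90: return "G1"
    if egfr >= 60: return "G2"
    if egfr >= 45: return "G3a"
    if egfr >= 30: return "G3b"
    if egfr >= 15: return "G4"
    return "G5"

def fill_template(record: dict) -> str:
    """Render chronic-comorbidity lines of a note from EHR data points."""
    bmi = record["weight_kg"] / record["height_m"] ** 2
    return (
        f"Obesity: BMI {bmi:.1f}\n"
        f"Chronic kidney disease: eGFR {record['egfr']:.0f}, "
        f"GFR category {ckd_stage(record['egfr'])}"
    )

# Example patient record (invented values).
note = fill_template({"weight_kg": 102.0, "height_m": 1.78, "egfr": 52.0})
```

The payoff is the one described in the text: the clinician sees the relevant values inside the template rather than hunting for them elsewhere in the EHR.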

Results

The standardized H&P medicine template was implemented at MRVAMC and LCVAMC in June 2018 (the final month of the third quarter of FY 2018). As clinical providers throughout NF/SGVHS used the standardized template, postimplementation acute LOC CSI significantly improved. Although the national VHA average decreased slightly from 0.96 in the first quarter of FY 2017 to 0.89 in the first quarter of FY 2019, MRVAMC acute LOC CSI improved from 0.76 to 0.97, and LCVAMC acute LOC CSI improved from 0.81 to 1.07 over the same period.

Compliance also was monitored within MRVAMC and LCVAMC for about 1 year after implementation of the standardized H&P medicine template. Compliance was defined as how often the standardized H&P medicine template was used for inpatient medicine admissions to the acute care wards vs other H&P notes (such as personalized templates).

Compliance analysis began with acquisition of completed H&P medicine notes from June 18, 2018, to June 30, 2019, from the VHA Veterans Information Systems and Technology Architecture (VistA) clinical and business information system, using the search strings “H&P admission history and physical” and “history of present illness.”10

The review identified 10,845 completed medicine H&P notes. Of these, 918 notes were excluded because the search yielded a location not corresponding to MRVAMC or LCVAMC. Of the 9,927 remaining notes, 8,025 were completed at MRVAMC and 1,902 at LCVAMC (Figure 2).
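The note counts reconcile (10,845 − 918 = 9,927 = 8,025 + 1,902), and the compliance percentages that follow are simple ratios of template notes to all completed notes. A small sketch, with the per-month raw counts treated as hypothetical since only percentages are published:

```python
# Sanity check of the reported note counts, plus the compliance ratio
# used in the Results. Monthly raw counts are not published, so the
# example in the test below is hypothetical.
total_notes = 10_845   # completed medicine H&P notes found in VistA
excluded = 918         # search location matched neither MRVAMC nor LCVAMC
mrvamc_notes = 8_025
lcvamc_notes = 1_902

remaining = total_notes - excluded  # notes retained for analysis

def compliance_pct(template_notes: int, all_notes: int) -> float:
    """Share of completed H&P notes written with the standardized template."""
    return round(100 * template_notes / all_notes, 1)
```

For example, a month with 432 standardized-template notes out of 1,000 completed notes would score 43.2%.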



At MRVAMC, compliance was reviewed monthly for the 8,025 completed H&P medicine notes from June 18, 2018, to June 30, 2019. The standardized H&P medicine template was used for 43.2% of completed H&P medicine notes in June 2018. By June 2019, MRVAMC clinical providers demonstrated significant improvement, using the standardized H&P medicine template for 89.9% of notes (Figure 3). Total average compliance from June 18, 2018, to June 30, 2019, was 88.4%, roughly double the rate at the template's initial introduction.



At LCVAMC, compliance was reviewed monthly for the 1,902 completed H&P medicine notes from June 18, 2018, to June 30, 2019. The standardized template was used for 48.2% of notes in June 2018. By June 2019, LCVAMC clinical providers demonstrated significant improvement, with use increasing to 96.9%. Total average compliance from June 18, 2018, to June 30, 2019, was 93.8%, almost double the baseline rate.

Discussion

Template design with clear objectives, strategic collaboration, and integrated informatics solutions has the potential to increase accuracy of documentation. As shown, the NF/SGVHS template design was associated with significant improvement in acute LOC CSI for both MRVAMC and LCVAMC due to more accurate documentation using the standardized H&P medicine template.

Numerous factors contributed to the success of this template design. First, a clear vision for the template's application was communicated to key stakeholders. In addition, the template design team focused on specific goals rather than a one-size-fits-all approach, which was crucial for sustainable execution. Although interdisciplinary teamwork has the potential to produce innovative practices, large multidisciplinary teams may have difficulty establishing a shared vision, creating barriers to achieving project goals.

Balancing standardization and customization was essential for user buy-in. As noted by Gardner and Pearce, inviting clinical providers to participate in template design and allowing for customization has the potential to increase acceptance and use of templates.11 Although the original design for the standardized H&P medicine template started with the medicine service at NF/SGVHS, the design framework is applicable to numerous services where various clinical care elements can be customized.

Explaining the informatics tools built into the template allowed clinicians to see opportunities to improve clinical documentation and to understand its impact on reported health care outcomes. When improvement work integrates clinical care delivery with administrative expectations, it is essential that health care systems understand and strategically execute project initiatives at this critical juncture.

Finally, incorporation of a sustainability plan when process improvement strategies are implemented is vital. In addition to collaboration with the clinical providers during design and implementation of the standardized template, leadership buy-in was key. Compliance with standardized H&P medicine template use was monitored monthly and reviewed by the NF/SGVHS Chief of Staff.

As noted, postimplementation acute LOC CSI was higher at LCVAMC than at MRVAMC despite LCVAMC being a smaller facility. This might be due to MRVAMC's designation as a teaching institution. Medicine is the only inpatient service at LCVAMC and is staffed by hospitalists with limited specialists available for consultation, whereas MRVAMC is a tertiary care teaching facility with numerous inpatient services and subspecialties. LCVAMC thus has more staffing continuity, whereas house staff rotating through MRVAMC require continued training and education on new templates and process changes.

Limitations

Although standardized template design was successful at NF/SGVHS, limitations should be noted. Our clinical documentation improvement (CDI) program was expanded at about the same time the new templates were released, and the CDI expansion likely had a synergistic effect with the new template design on acute LOC CSI.

CSI is a complex, risk-adjusted model that incorporates numerous factors, including but not limited to diagnosis and comorbid conditions. Other factors include age, marital status, procedures, source of admission, specific laboratory values, medical or surgical diagnosis-related group, ICU stays, and immunosuppressive status. CSI also includes operative and nonoperative components that average into an overall CSI. Because the majority of the CSI within NF/SGVHS is composed of nonoperative constituents, we do not believe the operative component had any substantial impact on the reported CSI improvements.
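The averaging of operative and nonoperative components can be pictured as a case-weighted blend. The function and numbers below are purely illustrative; the actual CSI formula is internal to VHA's Inpatient Evaluation Center and is not published here.

```python
# Illustrative case-weighted blend of the two CSI components described
# above. Weights and component values are invented, not the VHA formula.
def overall_csi(nonoperative: float, operative: float, nonop_share: float) -> float:
    """Average the component indices, weighted by each component's share of cases."""
    return nonoperative * nonop_share + operative * (1 - nonop_share)
```

With a hypothetical nonoperative share of 0.9, the nonoperative component dominates the blend, which is the intuition behind the authors' argument that the operative component had little effect on the reported trend.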

In addition, template entry into the VHA EHR requires a location selection (such as a clinic name or, following an inpatient admission, a ward name). Of the 10,845 completed H&P medicine notes identified in VistA, 918 were excluded from analysis because the search yielded a location not corresponding to MRVAMC or LCVAMC. We believe these exclusions were likely due to user error, with locations unrelated to MRVAMC or LCVAMC selected during standardized H&P medicine template entry.

Conclusions

After NF/SGVHS implemented a uniquely designed template embedded with informatics solutions within the VHA EHR, CSI increased due to more accurate documentation.

Next steps include determining the impact of the NF/SGVHS template design on potential reimbursement and expanding template design into the outpatient setting where there are additional opportunities to improve clinical documentation and reliable reporting of health care outcomes.

Acknowledgments

The authors thank the following individuals for their experience and contribution: Beverley White is the Clinical Documentation Improvement Coordinator at North Florida/South Georgia Veterans Health System and provided expertise on documentation requirements. Russell Jacobitz and Susan Rozelle provided technical expertise on electronic health record system enhancements and implemented the template design. Jess Delaune, MD, and Robert Carroll, MD, provided additional physician input during template design. We also acknowledge the Inpatient Evaluation Center (IPEC) within the Veterans Health Administration (VHA). IPEC developed the case severity index, a risk-adjusted formula incorporated into the VHA quality metric reporting system, Strategic Analytics for Improvement and Learning (SAIL).

References

1. Kang R, Kunkel S, Columbo J, et al. Association of hospital employee satisfaction with patient safety and satisfaction within Veterans Affairs medical centers. Am J Med. 2019;132(4):530-534.e1. doi:10.1016/j.amjmed.2018.11.031

2. Valluru N, Kang L, Gaidos JK. Health maintenance documentation improves for veterans with IBD using a template in the Computerized Patient Record System. Dig Dis Sci. 2018;63(7):1782-1786. doi:10.1007/s10620-018-5093-5

3. Thaker VV, Lee F, Bottino CJ, et al. Impact of an electronic template on documentation of obesity in a primary care clinic. Clin Pediatr. 2016;55(12):1152-1159. doi:10.1177/0009922815621331

4. Grogan EL, Speroff T, Deppen S, et al. Improving documentation of patient acuity level using a progress note template. J Am Coll Surg. 2004;199(3):468-475. doi:10.1016/j.jamcollsurg.2004.05.254

5. Centers for Disease Control and Prevention. Classification of diseases, functioning, and disability. https://www.cdc.gov/nchs/icd/index.htm. Updated June 30, 2020. Accessed October 12, 2020.

6. Marill KA, Gauharou ES, Nelson BK, et al. Prospective, randomized trial of template-assisted versus undirected written recording of physician records in the emergency department. Ann Emerg Med. 1999;33(5):500-509. doi:10.1016/S0196-0644(99)70336-7

7. Elixhauser A, Steiner C, Harris DR, et al. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27. doi:10.1097/00005650-199801000-00004

8. Hart KA, Steinfeldt BA, Braun RD. Formulation and applications of a probabilistic Pareto chart. AIAA. 2015;0804. doi:10.2514/6.2015-0804

9. IBM. IBM Knowledge Center: overview of data objects. https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.cbclx01/data_objects.htm. Accessed October 12, 2020.

10. US Department of Veterans Affairs. History of IT at VA. https://www.oit.va.gov/about/history.cfm. Accessed October 18, 2020.

11. Gardner CL, Pearce PF. Customization of electronic medical record templates to improve end-user satisfaction. Comput Inform Nurs. 2013;31(3):115-121. doi:10.1097/NXN.0b013e3182771814
